Test Report: Docker_Linux 19530

6d579fb1420e6d4e07520b8ad7db429a8522bbcd:2024-08-29:35998

Test fail (1/343)

| Order | Failed test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.39s   |
TestAddons/parallel/Registry (72.39s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.33019ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-7xs8r" [0bc8f454-eced-450f-ab5d-b961648307b9] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003230347s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-wwxwv" [ef74c371-cbb9-4c81-91dd-dcbc748f81d0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003548486s
addons_test.go:342: (dbg) Run:  kubectl --context addons-505336 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-505336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-505336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.069999559s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-505336 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 ip
2024/08/29 19:09:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-505336
helpers_test.go:235: (dbg) docker inspect addons-505336:

-- stdout --
	[
	    {
	        "Id": "8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1",
	        "Created": "2024-08-29T18:56:12.889590417Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 427561,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:56:12.99355564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cf9874f1e25d62abde3fdda0022141a8ec82ded75077d073b80dc8f90194cf19",
	        "ResolvConfPath": "/var/lib/docker/containers/8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1/hosts",
	        "LogPath": "/var/lib/docker/containers/8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1/8ef4bbc72da474829e04afb352bc954566bb82aaa67e78c02a0071c69b58d9b1-json.log",
	        "Name": "/addons-505336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-505336:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-505336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/86b0c4497f83061cd04974fda87a0c682723e945a7c6fbd6e5d6e042555617a6-init/diff:/var/lib/docker/overlay2/1f5f6f094bf9cdbb177c01e7ca97214612c2dac25cb3e288a6ba736aeaa2c5c2/diff",
	                "MergedDir": "/var/lib/docker/overlay2/86b0c4497f83061cd04974fda87a0c682723e945a7c6fbd6e5d6e042555617a6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/86b0c4497f83061cd04974fda87a0c682723e945a7c6fbd6e5d6e042555617a6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/86b0c4497f83061cd04974fda87a0c682723e945a7c6fbd6e5d6e042555617a6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-505336",
	                "Source": "/var/lib/docker/volumes/addons-505336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-505336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-505336",
	                "name.minikube.sigs.k8s.io": "addons-505336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd17ded43393cc6869ec5fb504a4afde883f25c7a15d9c8499bb1b97097f66ce",
	            "SandboxKey": "/var/run/docker/netns/fd17ded43393",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-505336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "600cb417e83e89112eef30e4731ed0ea8e8bb05979f28293f72fc192bf2a6925",
	                    "EndpointID": "c84db0bff99942b0798ea7668ee6cd30a454078180afe266de15b1f57ee90ce3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-505336",
	                        "8ef4bbc72da4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-505336 -n addons-505336
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-211674                                                                   | download-docker-211674 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-967186   | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | binary-mirror-967186                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42019                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-967186                                                                     | binary-mirror-967186   | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| addons  | disable dashboard -p                                                                        | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-505336                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | addons-505336                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-505336 --wait=true                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:59 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 18:59 UTC | 29 Aug 24 18:59 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-505336 addons                                                                        | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ssh     | addons-505336 ssh cat                                                                       | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | /opt/local-path-provisioner/pvc-e7cb1702-6246-4f1e-af32-73da81c1bbe3_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | -p addons-505336                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | addons-505336                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | -p addons-505336                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | addons-505336                                                                               |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-505336 addons                                                                        | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-505336 ssh curl -s                                                                   | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-505336 ip                                                                            | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-505336 addons                                                                        | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:08 UTC | 29 Aug 24 19:08 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-505336 ip                                                                            | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:09 UTC | 29 Aug 24 19:09 UTC |
	| addons  | addons-505336 addons disable                                                                | addons-505336          | jenkins | v1.33.1 | 29 Aug 24 19:09 UTC | 29 Aug 24 19:09 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:49
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:49.472236  426808 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:49.472455  426808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:49.472463  426808 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:49.472468  426808 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:49.472611  426808 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 18:55:49.473153  426808 out.go:352] Setting JSON to false
	I0829 18:55:49.473954  426808 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":77895,"bootTime":1724879854,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:49.474009  426808 start.go:139] virtualization: kvm guest
	I0829 18:55:49.476044  426808 out.go:177] * [addons-505336] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:49.477351  426808 notify.go:220] Checking for updates...
	I0829 18:55:49.477363  426808 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 18:55:49.478639  426808 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:49.479952  426808 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 18:55:49.481081  426808 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	I0829 18:55:49.482221  426808 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:55:49.483408  426808 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:55:49.484628  426808 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:49.504861  426808 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:55:49.505005  426808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:49.548490  426808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:55:49.540064972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:49.548592  426808 docker.go:307] overlay module found
	I0829 18:55:49.550200  426808 out.go:177] * Using the docker driver based on user configuration
	I0829 18:55:49.551457  426808 start.go:297] selected driver: docker
	I0829 18:55:49.551478  426808 start.go:901] validating driver "docker" against <nil>
	I0829 18:55:49.551495  426808 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:55:49.552437  426808 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:49.598001  426808 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:55:49.589555401 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:49.598155  426808 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:49.598348  426808 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:55:49.600079  426808 out.go:177] * Using Docker driver with root privileges
	I0829 18:55:49.601426  426808 cni.go:84] Creating CNI manager for ""
	I0829 18:55:49.601459  426808 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:55:49.601478  426808 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:55:49.601536  426808 start.go:340] cluster config:
	{Name:addons-505336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-505336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:49.602917  426808 out.go:177] * Starting "addons-505336" primary control-plane node in "addons-505336" cluster
	I0829 18:55:49.603988  426808 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:55:49.605144  426808 out.go:177] * Pulling base image v0.0.44-1724862063-19530 ...
	I0829 18:55:49.606188  426808 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:55:49.606217  426808 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0829 18:55:49.606233  426808 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:55:49.606242  426808 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:49.606337  426808 preload.go:172] Found /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0829 18:55:49.606348  426808 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 18:55:49.606658  426808 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/config.json ...
	I0829 18:55:49.606678  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/config.json: {Name:mk5cb17a28b802907616a2b7f8d0dca7d8c84314 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:55:49.621301  426808 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0829 18:55:49.621406  426808 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0829 18:55:49.621420  426808 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory, skipping pull
	I0829 18:55:49.621425  426808 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 exists in cache, skipping pull
	I0829 18:55:49.621432  426808 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 as a tarball
	I0829 18:55:49.621440  426808 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from local cache
	I0829 18:56:01.263472  426808 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 from cached tarball
	I0829 18:56:01.263509  426808 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:56:01.263553  426808 start.go:360] acquireMachinesLock for addons-505336: {Name:mkbfe615c89ee2b0f36152d22e96b58565330b4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:56:01.263648  426808 start.go:364] duration metric: took 74.32µs to acquireMachinesLock for "addons-505336"
	I0829 18:56:01.263671  426808 start.go:93] Provisioning new machine with config: &{Name:addons-505336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-505336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:56:01.263755  426808 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:56:01.265626  426808 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:56:01.265855  426808 start.go:159] libmachine.API.Create for "addons-505336" (driver="docker")
	I0829 18:56:01.265889  426808 client.go:168] LocalClient.Create starting
	I0829 18:56:01.265971  426808 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem
	I0829 18:56:01.331464  426808 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/cert.pem
	I0829 18:56:01.723592  426808 cli_runner.go:164] Run: docker network inspect addons-505336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:56:01.738388  426808 cli_runner.go:211] docker network inspect addons-505336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:56:01.738451  426808 network_create.go:284] running [docker network inspect addons-505336] to gather additional debugging logs...
	I0829 18:56:01.738473  426808 cli_runner.go:164] Run: docker network inspect addons-505336
	W0829 18:56:01.752651  426808 cli_runner.go:211] docker network inspect addons-505336 returned with exit code 1
	I0829 18:56:01.752679  426808 network_create.go:287] error running [docker network inspect addons-505336]: docker network inspect addons-505336: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-505336 not found
	I0829 18:56:01.752694  426808 network_create.go:289] output of [docker network inspect addons-505336]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-505336 not found
	
	** /stderr **
	I0829 18:56:01.752769  426808 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:56:01.767534  426808 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013588c0}
	I0829 18:56:01.767577  426808 network_create.go:124] attempt to create docker network addons-505336 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:56:01.767623  426808 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-505336 addons-505336
	I0829 18:56:01.825407  426808 network_create.go:108] docker network addons-505336 192.168.49.0/24 created
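	[editor's note] The network.go line above reports the free private subnet and its derived fields; a minimal sketch of that derivation, using only the standard-library `ipaddress` module and assuming the convention visible in the log (gateway = first host, clients = .2 through .254):

```python
import ipaddress

# Recompute the fields minikube logs for the chosen /24 private subnet.
# Convention taken from the log line itself: .1 is reserved for the
# bridge gateway, containers get .2 .. .254.
net = ipaddress.ip_network("192.168.49.0/24")
hosts = list(net.hosts())          # usable addresses .1 through .254

gateway = hosts[0]                 # bridge gateway
client_min = hosts[1]              # first container IP
client_max = hosts[-1]             # last container IP
broadcast = net.broadcast_address

print(gateway, client_min, client_max, broadcast)
```

	This matches the `Gateway`, `ClientMin`, `ClientMax`, and `Broadcast` fields printed by network.go above; it is an illustration of the arithmetic, not minikube's actual scanning code.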
	I0829 18:56:01.825437  426808 kic.go:121] calculated static IP "192.168.49.2" for the "addons-505336" container
	I0829 18:56:01.825491  426808 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:56:01.840317  426808 cli_runner.go:164] Run: docker volume create addons-505336 --label name.minikube.sigs.k8s.io=addons-505336 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:56:01.855892  426808 oci.go:103] Successfully created a docker volume addons-505336
	I0829 18:56:01.855963  426808 cli_runner.go:164] Run: docker run --rm --name addons-505336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505336 --entrypoint /usr/bin/test -v addons-505336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib
	I0829 18:56:08.941593  426808 cli_runner.go:217] Completed: docker run --rm --name addons-505336-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505336 --entrypoint /usr/bin/test -v addons-505336:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -d /var/lib: (7.085593529s)
	I0829 18:56:08.941623  426808 oci.go:107] Successfully prepared a docker volume addons-505336
	I0829 18:56:08.941645  426808 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:56:08.941671  426808 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:56:08.941725  426808 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-505336:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:56:12.830876  426808 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-505336:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.889113932s)
	I0829 18:56:12.830913  426808 kic.go:203] duration metric: took 3.889239092s to extract preloaded images to volume ...
	W0829 18:56:12.831065  426808 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:56:12.831178  426808 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:56:12.872995  426808 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-505336 --name addons-505336 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-505336 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-505336 --network addons-505336 --ip 192.168.49.2 --volume addons-505336:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0
	I0829 18:56:13.160986  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Running}}
	I0829 18:56:13.177979  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:13.195798  426808 cli_runner.go:164] Run: docker exec addons-505336 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:56:13.237707  426808 oci.go:144] the created container "addons-505336" has a running status.
	I0829 18:56:13.237740  426808 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa...
	I0829 18:56:13.347503  426808 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:56:13.366565  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:13.383873  426808 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:56:13.383899  426808 kic_runner.go:114] Args: [docker exec --privileged addons-505336 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:56:13.426772  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:13.447053  426808 machine.go:93] provisionDockerMachine start ...
	I0829 18:56:13.447178  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:13.464011  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:13.464239  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:13.464254  426808 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:56:13.464876  426808 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38492->127.0.0.1:32803: read: connection reset by peer
	I0829 18:56:16.586205  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505336
	
	I0829 18:56:16.586241  426808 ubuntu.go:169] provisioning hostname "addons-505336"
	I0829 18:56:16.586298  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:16.602372  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:16.602569  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:16.602584  426808 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-505336 && echo "addons-505336" | sudo tee /etc/hostname
	I0829 18:56:16.729321  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-505336
	
	I0829 18:56:16.729390  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:16.745113  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:16.745348  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:16.745371  426808 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-505336' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-505336/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-505336' | sudo tee -a /etc/hosts; 
				fi
			fi
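	[editor's note] The shell snippet above either rewrites an existing `127.0.1.1` line or appends one; a rough Python equivalent of that branch logic (`patch_hosts` is a hypothetical helper, shown only to make the grep/sed flow explicit):

```python
import re

def patch_hosts(hosts_text: str, hostname: str) -> str:
    """Mirror the shell logic: no-op if the hostname is already mapped,
    rewrite an existing 127.0.1.1 entry, otherwise append one."""
    if re.search(r"\s" + re.escape(hostname) + r"$", hosts_text, re.M):
        return hosts_text  # already present, nothing to do
    if re.search(r"^127\.0\.1\.1\s", hosts_text, re.M):
        # equivalent of: sed -i 's/^127.0.1.1\s.*/127.0.1.1 <name>/'
        return re.sub(r"^127\.0\.1\.1\s.*$", "127.0.1.1 " + hostname,
                      hosts_text, flags=re.M)
    # equivalent of: echo '127.0.1.1 <name>' | tee -a /etc/hosts
    return hosts_text + "127.0.1.1 " + hostname + "\n"

print(patch_hosts("127.0.0.1 localhost\n", "addons-505336"))
```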
	I0829 18:56:16.862621  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 18:56:16.862652  426808 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19530-418716/.minikube CaCertPath:/home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19530-418716/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19530-418716/.minikube}
	I0829 18:56:16.862709  426808 ubuntu.go:177] setting up certificates
	I0829 18:56:16.862719  426808 provision.go:84] configureAuth start
	I0829 18:56:16.862767  426808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505336
	I0829 18:56:16.878066  426808 provision.go:143] copyHostCerts
	I0829 18:56:16.878141  426808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19530-418716/.minikube/ca.pem (1078 bytes)
	I0829 18:56:16.878287  426808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19530-418716/.minikube/cert.pem (1123 bytes)
	I0829 18:56:16.878353  426808 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19530-418716/.minikube/key.pem (1675 bytes)
	I0829 18:56:16.878405  426808 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19530-418716/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca-key.pem org=jenkins.addons-505336 san=[127.0.0.1 192.168.49.2 addons-505336 localhost minikube]
	I0829 18:56:16.986078  426808 provision.go:177] copyRemoteCerts
	I0829 18:56:16.986138  426808 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:56:16.986171  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:17.004188  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:17.090811  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:56:17.111503  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:56:17.131929  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:56:17.152258  426808 provision.go:87] duration metric: took 289.525952ms to configureAuth
	I0829 18:56:17.152282  426808 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:56:17.152454  426808 config.go:182] Loaded profile config "addons-505336": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:56:17.152639  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:17.168078  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:17.168300  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:17.168314  426808 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 18:56:17.290803  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0829 18:56:17.290829  426808 ubuntu.go:71] root file system type: overlay
	I0829 18:56:17.290938  426808 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 18:56:17.290991  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:17.307075  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:17.307243  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:17.307304  426808 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 18:56:17.433063  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 18:56:17.433135  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:17.448758  426808 main.go:141] libmachine: Using SSH client type: native
	I0829 18:56:17.448927  426808 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32803 <nil> <nil>}
	I0829 18:56:17.448943  426808 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 18:56:18.109112  426808 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-29 18:56:17.430331773 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0829 18:56:18.109138  426808 machine.go:96] duration metric: took 4.662053348s to provisionDockerMachine
	I0829 18:56:18.109149  426808 client.go:171] duration metric: took 16.843251987s to LocalClient.Create
	I0829 18:56:18.109168  426808 start.go:167] duration metric: took 16.84331394s to libmachine.API.Create "addons-505336"
	I0829 18:56:18.109178  426808 start.go:293] postStartSetup for "addons-505336" (driver="docker")
	I0829 18:56:18.109196  426808 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:56:18.109251  426808 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:56:18.109296  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:18.125308  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:18.215371  426808 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:56:18.218369  426808 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:56:18.218403  426808 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:56:18.218411  426808 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:56:18.218418  426808 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:56:18.218428  426808 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-418716/.minikube/addons for local assets ...
	I0829 18:56:18.218484  426808 filesync.go:126] Scanning /home/jenkins/minikube-integration/19530-418716/.minikube/files for local assets ...
	I0829 18:56:18.218509  426808 start.go:296] duration metric: took 109.320975ms for postStartSetup
	I0829 18:56:18.218882  426808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505336
	I0829 18:56:18.234288  426808 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/config.json ...
	I0829 18:56:18.234535  426808 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:56:18.234590  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:18.249930  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:18.339499  426808 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:56:18.343611  426808 start.go:128] duration metric: took 17.079841903s to createHost
	I0829 18:56:18.343635  426808 start.go:83] releasing machines lock for "addons-505336", held for 17.079975128s
	I0829 18:56:18.343708  426808 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-505336
	I0829 18:56:18.358635  426808 ssh_runner.go:195] Run: cat /version.json
	I0829 18:56:18.358679  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:18.358714  426808 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:56:18.358790  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:18.373806  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:18.376197  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:18.527058  426808 ssh_runner.go:195] Run: systemctl --version
	I0829 18:56:18.531146  426808 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:56:18.534983  426808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0829 18:56:18.556181  426808 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0829 18:56:18.556260  426808 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:56:18.580072  426808 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:56:18.580099  426808 start.go:495] detecting cgroup driver to use...
	I0829 18:56:18.580132  426808 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:56:18.580242  426808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:56:18.594135  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:56:18.602652  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:56:18.611000  426808 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 18:56:18.611047  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 18:56:18.619188  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:56:18.627298  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:56:18.635103  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:56:18.643536  426808 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:56:18.651134  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:56:18.659093  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:56:18.667374  426808 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
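The run of `sed -i -r` commands above rewrites containerd's TOML in place. A sketch of one such edit against a scratch copy (the file contents below are illustrative, not the full config the log patches):

```shell
# Flip SystemdCgroup to false in a scratch config.toml, as the
# provisioner does when it detects the "cgroupfs" cgroup driver.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

The captured `( *)` group preserves the original indentation, so the edit is safe regardless of how deeply the key is nested.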
	I0829 18:56:18.675638  426808 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:56:18.682531  426808 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:56:18.689606  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:18.764743  426808 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 18:56:18.856717  426808 start.go:495] detecting cgroup driver to use...
	I0829 18:56:18.856767  426808 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:56:18.856816  426808 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 18:56:18.867702  426808 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0829 18:56:18.867774  426808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:56:18.878658  426808 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:56:18.893952  426808 ssh_runner.go:195] Run: which cri-dockerd
	I0829 18:56:18.897109  426808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:56:18.905495  426808 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:56:18.921675  426808 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 18:56:19.014652  426808 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 18:56:19.113584  426808 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 18:56:19.113724  426808 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 18:56:19.129567  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:19.207284  426808 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:56:19.457471  426808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:56:19.468165  426808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:56:19.478588  426808 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:56:19.552351  426808 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 18:56:19.624515  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:19.695986  426808 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 18:56:19.708259  426808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:56:19.717891  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:19.788321  426808 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 18:56:19.846455  426808 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:56:19.846559  426808 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 18:56:19.849981  426808 start.go:563] Will wait 60s for crictl version
	I0829 18:56:19.850036  426808 ssh_runner.go:195] Run: which crictl
	I0829 18:56:19.853141  426808 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:56:19.883049  426808 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.0
	RuntimeApiVersion:  v1
	I0829 18:56:19.883120  426808 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:56:19.904937  426808 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:56:19.929612  426808 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
	I0829 18:56:19.929680  426808 cli_runner.go:164] Run: docker network inspect addons-505336 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:56:19.944677  426808 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:56:19.947966  426808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
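The `/etc/hosts` update above first drops any stale entry for the name, then appends the fresh mapping, so the file ends up with exactly one line for `host.minikube.internal`. A sketch against a temp file (the IP and hostname come from the log; the path is illustrative, and the `sudo cp` step is replaced by a plain `mv`):

```shell
# Rewrite a hosts file so host.minikube.internal maps to exactly one IP.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.9\thost.minikube.internal\n' > "$hosts"

{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Writing to a temporary file and moving it into place keeps the update atomic from the reader's point of view.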
	I0829 18:56:19.957576  426808 kubeadm.go:883] updating cluster {Name:addons-505336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-505336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:56:19.957706  426808 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:56:19.957762  426808 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:56:19.977077  426808 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:56:19.977101  426808 docker.go:615] Images already preloaded, skipping extraction
	I0829 18:56:19.977174  426808 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:56:19.994408  426808 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:56:19.994434  426808 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:56:19.994455  426808 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0829 18:56:19.994605  426808 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-505336 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-505336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:56:19.994671  426808 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 18:56:20.037716  426808 cni.go:84] Creating CNI manager for ""
	I0829 18:56:20.037748  426808 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:56:20.037767  426808 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:56:20.037800  426808 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-505336 NodeName:addons-505336 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:56:20.037966  426808 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-505336"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
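A generated manifest like the kubeadm config above can be sanity-checked with simple `grep` probes before it is shipped to the node; a sketch that embeds just two of the logged kubelet keys (the snippet is illustrative, not the full file):

```shell
# Verify the kubelet section carries the cgroup driver and CRI socket
# that the provisioner logged above.
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
EOF
grep -q '^cgroupDriver: cgroupfs$' "$cfg" && echo "cgroup driver ok"
grep -q 'cri-dockerd.sock' "$cfg" && echo "CRI socket ok"
```

A mismatch between the kubelet's `cgroupDriver` and the container runtime's cgroup configuration is a common cause of pods failing to start, which is why the log patches both sides to `cgroupfs`.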
	I0829 18:56:20.038041  426808 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:56:20.045924  426808 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:56:20.045993  426808 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:56:20.053335  426808 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:56:20.068847  426808 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:56:20.084159  426808 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0829 18:56:20.099126  426808 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:56:20.102032  426808 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:56:20.111310  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:20.192718  426808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:20.204662  426808 certs.go:68] Setting up /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336 for IP: 192.168.49.2
	I0829 18:56:20.204681  426808 certs.go:194] generating shared ca certs ...
	I0829 18:56:20.204695  426808 certs.go:226] acquiring lock for ca certs: {Name:mka4e5df4d0f5dd863b35d0a189a931cab4268f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.204810  426808 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19530-418716/.minikube/ca.key
	I0829 18:56:20.285035  426808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt ...
	I0829 18:56:20.285062  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt: {Name:mkdae7ace7bad9837a7e9117447db1af6f4b9c12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.285221  426808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-418716/.minikube/ca.key ...
	I0829 18:56:20.285232  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/ca.key: {Name:mk653c9ed4efa28f38e9fb9d2fd0a214ec7239b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.285299  426808 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.key
	I0829 18:56:20.487866  426808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.crt ...
	I0829 18:56:20.487901  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.crt: {Name:mk3ea79acdff78220a070a6f46c75ff34ede82f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.488091  426808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.key ...
	I0829 18:56:20.488112  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.key: {Name:mke177191b156f83a8ba65e0fb9da46a36407c4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.488193  426808 certs.go:256] generating profile certs ...
	I0829 18:56:20.488264  426808 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.key
	I0829 18:56:20.488280  426808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt with IP's: []
	I0829 18:56:20.602812  426808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt ...
	I0829 18:56:20.602838  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: {Name:mk62766a01f914f2f1c3fd2f9596a5754163f207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.602992  426808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.key ...
	I0829 18:56:20.603004  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.key: {Name:mk9b01a96eea1dafc4e09197e12633e073bde04b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.603073  426808 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key.5b3d1d2d
	I0829 18:56:20.603090  426808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt.5b3d1d2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:56:20.709507  426808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt.5b3d1d2d ...
	I0829 18:56:20.709543  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt.5b3d1d2d: {Name:mk5807cc00835d78daa74d0205693bc70130675a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.709710  426808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key.5b3d1d2d ...
	I0829 18:56:20.709725  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key.5b3d1d2d: {Name:mk7be2718e8264d68c99ecad8d9bc4db42e631fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.709791  426808 certs.go:381] copying /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt.5b3d1d2d -> /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt
	I0829 18:56:20.709858  426808 certs.go:385] copying /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key.5b3d1d2d -> /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key
	I0829 18:56:20.709904  426808 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.key
	I0829 18:56:20.709920  426808 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.crt with IP's: []
	I0829 18:56:20.888091  426808 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.crt ...
	I0829 18:56:20.888119  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.crt: {Name:mk2e01e96dbf706a41969e36b7af34b93c73e217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.888267  426808 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.key ...
	I0829 18:56:20.888280  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.key: {Name:mk2a869b5b77d7a6c475622ff109e619d913b029 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:20.888440  426808 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca-key.pem (1679 bytes)
	I0829 18:56:20.888474  426808 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:56:20.888498  426808 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:56:20.888532  426808 certs.go:484] found cert: /home/jenkins/minikube-integration/19530-418716/.minikube/certs/key.pem (1675 bytes)
	I0829 18:56:20.889204  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:56:20.910475  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:56:20.930979  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:56:20.951306  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:56:20.971589  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:56:20.991477  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:56:21.012003  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:56:21.033235  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:56:21.054411  426808 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:56:21.074680  426808 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:56:21.089925  426808 ssh_runner.go:195] Run: openssl version
	I0829 18:56:21.094711  426808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:56:21.103053  426808 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:21.106062  426808 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:21.106119  426808 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:56:21.112400  426808 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 18:56:21.120578  426808 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:56:21.123727  426808 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:56:21.123769  426808 kubeadm.go:392] StartCluster: {Name:addons-505336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-505336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:56:21.123871  426808 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:56:21.140346  426808 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:56:21.148140  426808 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:56:21.155884  426808 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:56:21.155927  426808 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:56:21.163186  426808 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:56:21.163202  426808 kubeadm.go:157] found existing configuration files:
	
	I0829 18:56:21.163235  426808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:56:21.170419  426808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:56:21.170459  426808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:56:21.177906  426808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:56:21.185113  426808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:56:21.185158  426808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:56:21.192714  426808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:56:21.199844  426808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:56:21.199893  426808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:56:21.207115  426808 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:56:21.214181  426808 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:56:21.214222  426808 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 18:56:21.221214  426808 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:56:21.252722  426808 kubeadm.go:310] W0829 18:56:21.252027    1922 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:21.253093  426808 kubeadm.go:310] W0829 18:56:21.252630    1922 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:56:21.272841  426808 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0829 18:56:21.320161  426808 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:56:30.174036  426808 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:56:30.174094  426808 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:56:30.174174  426808 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:56:30.174242  426808 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0829 18:56:30.174282  426808 kubeadm.go:310] OS: Linux
	I0829 18:56:30.174377  426808 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:56:30.174463  426808 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:56:30.174551  426808 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:56:30.174622  426808 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:56:30.174698  426808 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:56:30.174768  426808 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:56:30.174848  426808 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:56:30.174897  426808 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:56:30.174947  426808 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:56:30.175047  426808 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:56:30.175185  426808 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:56:30.175312  426808 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:56:30.175406  426808 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:56:30.176894  426808 out.go:235]   - Generating certificates and keys ...
	I0829 18:56:30.176995  426808 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:56:30.177090  426808 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:56:30.177154  426808 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:56:30.177234  426808 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:56:30.177333  426808 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:56:30.177392  426808 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:56:30.177438  426808 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:56:30.177586  426808 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-505336 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:56:30.177672  426808 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:56:30.177792  426808 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-505336 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:56:30.177864  426808 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:56:30.177932  426808 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:56:30.177990  426808 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:56:30.178071  426808 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:56:30.178124  426808 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:56:30.178172  426808 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:56:30.178216  426808 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:56:30.178269  426808 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:56:30.178324  426808 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:56:30.178391  426808 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:56:30.178471  426808 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:56:30.179556  426808 out.go:235]   - Booting up control plane ...
	I0829 18:56:30.179654  426808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:56:30.179745  426808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:56:30.179834  426808 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:56:30.179959  426808 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:56:30.180075  426808 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:56:30.180113  426808 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:56:30.180253  426808 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:56:30.180340  426808 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:56:30.180390  426808 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.650816ms
	I0829 18:56:30.180483  426808 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:56:30.180580  426808 kubeadm.go:310] [api-check] The API server is healthy after 4.501290427s
	I0829 18:56:30.180744  426808 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:56:30.180917  426808 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:56:30.181000  426808 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:56:30.181227  426808 kubeadm.go:310] [mark-control-plane] Marking the node addons-505336 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:56:30.181327  426808 kubeadm.go:310] [bootstrap-token] Using token: mazegd.fx0lflllcl8q5igp
	I0829 18:56:30.182583  426808 out.go:235]   - Configuring RBAC rules ...
	I0829 18:56:30.182706  426808 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:56:30.182852  426808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:56:30.183056  426808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:56:30.183243  426808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:56:30.183366  426808 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:56:30.183444  426808 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:56:30.183546  426808 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:56:30.183587  426808 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:56:30.183626  426808 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:56:30.183631  426808 kubeadm.go:310] 
	I0829 18:56:30.183683  426808 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:56:30.183689  426808 kubeadm.go:310] 
	I0829 18:56:30.183753  426808 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:56:30.183759  426808 kubeadm.go:310] 
	I0829 18:56:30.183785  426808 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:56:30.183836  426808 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:56:30.183878  426808 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:56:30.183885  426808 kubeadm.go:310] 
	I0829 18:56:30.183927  426808 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:56:30.183936  426808 kubeadm.go:310] 
	I0829 18:56:30.183977  426808 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:56:30.183983  426808 kubeadm.go:310] 
	I0829 18:56:30.184028  426808 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:56:30.184089  426808 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:56:30.184148  426808 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:56:30.184154  426808 kubeadm.go:310] 
	I0829 18:56:30.184220  426808 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:56:30.184284  426808 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:56:30.184290  426808 kubeadm.go:310] 
	I0829 18:56:30.184382  426808 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mazegd.fx0lflllcl8q5igp \
	I0829 18:56:30.184465  426808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9bfd7afbd259a94815ec69a6713b021add294d72979ae362a8e14d4b4c83f5e \
	I0829 18:56:30.184490  426808 kubeadm.go:310] 	--control-plane 
	I0829 18:56:30.184496  426808 kubeadm.go:310] 
	I0829 18:56:30.184572  426808 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:56:30.184578  426808 kubeadm.go:310] 
	I0829 18:56:30.184650  426808 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mazegd.fx0lflllcl8q5igp \
	I0829 18:56:30.184776  426808 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e9bfd7afbd259a94815ec69a6713b021add294d72979ae362a8e14d4b4c83f5e 
	I0829 18:56:30.184791  426808 cni.go:84] Creating CNI manager for ""
	I0829 18:56:30.184803  426808 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:56:30.186008  426808 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:56:30.187135  426808 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:56:30.195381  426808 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:56:30.210693  426808 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:56:30.210747  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.210798  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-505336 minikube.k8s.io/updated_at=2024_08_29T18_56_30_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033 minikube.k8s.io/name=addons-505336 minikube.k8s.io/primary=true
	I0829 18:56:30.271729  426808 ops.go:34] apiserver oom_adj: -16
	I0829 18:56:30.271754  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:30.771818  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.271826  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:31.772347  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.272029  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:32.772814  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.272599  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:33.772465  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:34.272051  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:34.772813  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:35.272776  426808 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:56:35.332994  426808 kubeadm.go:1113] duration metric: took 5.122290558s to wait for elevateKubeSystemPrivileges
	I0829 18:56:35.333036  426808 kubeadm.go:394] duration metric: took 14.209271406s to StartCluster
	I0829 18:56:35.333062  426808 settings.go:142] acquiring lock: {Name:mk8d18e194f03e62292d34a90cdfbd838fafb153 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:35.333216  426808 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 18:56:35.333607  426808 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/kubeconfig: {Name:mk28ee8c6d7b073bfc329febc7f1844f1691b19d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:56:35.333842  426808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:56:35.333854  426808 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:56:35.333913  426808 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:56:35.334007  426808 addons.go:69] Setting yakd=true in profile "addons-505336"
	I0829 18:56:35.334022  426808 addons.go:69] Setting cloud-spanner=true in profile "addons-505336"
	I0829 18:56:35.334043  426808 addons.go:234] Setting addon yakd=true in "addons-505336"
	I0829 18:56:35.334057  426808 config.go:182] Loaded profile config "addons-505336": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:56:35.334055  426808 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-505336"
	I0829 18:56:35.334064  426808 addons.go:69] Setting metrics-server=true in profile "addons-505336"
	I0829 18:56:35.334078  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334083  426808 addons.go:69] Setting registry=true in profile "addons-505336"
	I0829 18:56:35.334092  426808 addons.go:69] Setting volcano=true in profile "addons-505336"
	I0829 18:56:35.334094  426808 addons.go:234] Setting addon metrics-server=true in "addons-505336"
	I0829 18:56:35.334103  426808 addons.go:234] Setting addon registry=true in "addons-505336"
	I0829 18:56:35.334111  426808 addons.go:234] Setting addon volcano=true in "addons-505336"
	I0829 18:56:35.334124  426808 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-505336"
	I0829 18:56:35.334127  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334131  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334133  426808 addons.go:69] Setting gcp-auth=true in profile "addons-505336"
	I0829 18:56:35.334149  426808 mustload.go:65] Loading cluster: addons-505336
	I0829 18:56:35.334150  426808 addons.go:69] Setting volumesnapshots=true in profile "addons-505336"
	I0829 18:56:35.334154  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334171  426808 addons.go:234] Setting addon volumesnapshots=true in "addons-505336"
	I0829 18:56:35.334195  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334335  426808 config.go:182] Loaded profile config "addons-505336": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:56:35.334361  426808 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-505336"
	I0829 18:56:35.334411  426808 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-505336"
	I0829 18:56:35.334571  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334625  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334636  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334642  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334650  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334652  426808 addons.go:69] Setting ingress=true in profile "addons-505336"
	I0829 18:56:35.334677  426808 addons.go:234] Setting addon ingress=true in "addons-505336"
	I0829 18:56:35.334690  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334709  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334777  426808 addons.go:69] Setting default-storageclass=true in profile "addons-505336"
	I0829 18:56:35.334850  426808 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-505336"
	I0829 18:56:35.335135  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334068  426808 addons.go:234] Setting addon cloud-spanner=true in "addons-505336"
	I0829 18:56:35.335188  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.335242  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334126  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.335140  426808 addons.go:69] Setting ingress-dns=true in profile "addons-505336"
	I0829 18:56:35.335369  426808 addons.go:234] Setting addon ingress-dns=true in "addons-505336"
	I0829 18:56:35.335403  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.334642  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.335152  426808 addons.go:69] Setting helm-tiller=true in profile "addons-505336"
	I0829 18:56:35.335771  426808 addons.go:234] Setting addon helm-tiller=true in "addons-505336"
	I0829 18:56:35.335797  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.335822  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334076  426808 addons.go:69] Setting storage-provisioner=true in profile "addons-505336"
	I0829 18:56:35.335959  426808 addons.go:234] Setting addon storage-provisioner=true in "addons-505336"
	I0829 18:56:35.335982  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.336562  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.334080  426808 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-505336"
	I0829 18:56:35.336659  426808 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-505336"
	I0829 18:56:35.336689  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.335160  426808 addons.go:69] Setting inspektor-gadget=true in profile "addons-505336"
	I0829 18:56:35.336716  426808 addons.go:234] Setting addon inspektor-gadget=true in "addons-505336"
	I0829 18:56:35.336753  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.337102  426808 out.go:177] * Verifying Kubernetes components...
	I0829 18:56:35.337170  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.337130  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.338372  426808 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:56:35.366673  426808 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-505336"
	I0829 18:56:35.366728  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.367411  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.370831  426808 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:56:35.370934  426808 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:56:35.372000  426808 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:56:35.372019  426808 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:56:35.372073  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.372367  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.372801  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.373186  426808 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:56:35.373635  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.374559  426808 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:56:35.374578  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:56:35.374620  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.379028  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.382324  426808 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:56:35.383498  426808 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:56:35.384525  426808 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:56:35.386648  426808 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:56:35.386683  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:56:35.386739  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.386906  426808 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:56:35.386911  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:56:35.388006  426808 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:35.388024  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:56:35.388077  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.389339  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:56:35.390302  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:56:35.391256  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:56:35.392178  426808 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:56:35.392267  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:56:35.393998  426808 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:56:35.394013  426808 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:56:35.394069  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.398813  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:56:35.401987  426808 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:56:35.402096  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:56:35.403785  426808 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:35.403804  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:56:35.403856  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.405394  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:56:35.406443  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:56:35.406462  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:56:35.406515  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.429902  426808 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:56:35.430935  426808 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:56:35.432003  426808 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:35.432020  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:56:35.432074  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.434592  426808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:56:35.435733  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.437368  426808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:35.438759  426808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:35.440864  426808 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:35.440885  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:56:35.440938  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.444419  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.448881  426808 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:56:35.448918  426808 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:56:35.449324  426808 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:56:35.450393  426808 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:35.450411  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:56:35.450464  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.450869  426808 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:56:35.450888  426808 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:56:35.450942  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.451093  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:56:35.451104  426808 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:56:35.451144  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.454141  426808 addons.go:234] Setting addon default-storageclass=true in "addons-505336"
	I0829 18:56:35.454180  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:35.454744  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:35.458186  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.458949  426808 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:56:35.460035  426808 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:56:35.462929  426808 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:56:35.462949  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:56:35.463005  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.463640  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.463700  426808 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:35.463711  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:56:35.463753  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:35.469266  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.484165  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.484641  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.509616  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.511539  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.512060  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.512075  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.514659  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.516044  426808 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:35.516063  426808 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:56:35.516116  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	W0829 18:56:35.516329  426808 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0829 18:56:35.516362  426808 retry.go:31] will retry after 328.783396ms: ssh: handshake failed: EOF
	I0829 18:56:35.517173  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.524846  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.535321  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:35.681881  426808 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:56:35.682023  426808 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:56:35.802328  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:56:35.897615  426808 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:56:35.897716  426808 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:56:36.091778  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:56:36.091811  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:56:36.183122  426808 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:56:36.183157  426808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:56:36.191400  426808 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:56:36.191425  426808 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:56:36.195855  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:56:36.201633  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:56:36.276795  426808 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:56:36.276887  426808 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:56:36.279960  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:56:36.280789  426808 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:56:36.280844  426808 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:56:36.294225  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:56:36.294304  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:56:36.294905  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:56:36.296437  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:56:36.377509  426808 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:56:36.377541  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:56:36.377859  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:56:36.380438  426808 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:56:36.380470  426808 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:56:36.481851  426808 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:56:36.481933  426808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:56:36.587189  426808 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:56:36.587266  426808 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:56:36.594523  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:56:36.594551  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:56:36.684995  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:56:36.688956  426808 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:36.689054  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:56:36.695605  426808 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:36.695674  426808 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:56:36.696903  426808 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:56:36.696992  426808 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:56:36.781703  426808 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:56:36.781792  426808 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:56:36.977960  426808 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:56:36.978052  426808 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:56:36.994014  426808 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:56:36.994099  426808 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:56:37.178026  426808 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:56:37.178099  426808 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:56:37.195123  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:56:37.294310  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:56:37.294392  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:56:37.296863  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:56:37.479354  426808 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.797428008s)
	I0829 18:56:37.479562  426808 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:56:37.479470  426808 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.797418293s)
	I0829 18:56:37.480795  426808 node_ready.go:35] waiting up to 6m0s for node "addons-505336" to be "Ready" ...
	I0829 18:56:37.485158  426808 node_ready.go:49] node "addons-505336" has status "Ready":"True"
	I0829 18:56:37.485244  426808 node_ready.go:38] duration metric: took 4.371801ms for node "addons-505336" to be "Ready" ...
	I0829 18:56:37.485279  426808 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:56:37.493580  426808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace to be "Ready" ...
	I0829 18:56:37.584164  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:56:37.584194  426808 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:56:37.679167  426808 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:37.679194  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:56:37.777704  426808 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:56:37.777734  426808 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:56:37.880553  426808 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:37.880644  426808 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:56:37.881638  426808 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:56:37.881700  426808 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:56:37.994980  426808 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-505336" context rescaled to 1 replicas
	I0829 18:56:38.199017  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:56:38.277601  426808 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:56:38.277700  426808 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:56:38.281835  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:56:38.480630  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:56:38.480733  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:56:38.485060  426808 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:38.485138  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:56:38.776468  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:56:38.776503  426808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:56:38.980619  426808 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:38.980716  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:56:39.180002  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:39.282607  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.086713141s)
	I0829 18:56:39.282739  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.081074131s)
	I0829 18:56:39.282920  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.480556806s)
	I0829 18:56:39.583927  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:39.686478  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:56:39.686571  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:56:39.798983  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:56:40.292223  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:56:40.292317  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:56:41.080835  426808 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:41.080918  426808 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:56:41.592340  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:56:42.077672  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:42.388913  426808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:56:42.389062  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:42.411321  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:43.583533  426808 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:56:43.794757  426808 addons.go:234] Setting addon gcp-auth=true in "addons-505336"
	I0829 18:56:43.794841  426808 host.go:66] Checking if "addons-505336" exists ...
	I0829 18:56:43.795342  426808 cli_runner.go:164] Run: docker container inspect addons-505336 --format={{.State.Status}}
	I0829 18:56:43.813591  426808 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:56:43.813642  426808 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-505336
	I0829 18:56:43.828758  426808 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/addons-505336/id_rsa Username:docker}
	I0829 18:56:44.083443  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:46.498959  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:47.887540  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.607499979s)
	I0829 18:56:47.887751  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.592788253s)
	I0829 18:56:47.887791  426808 addons.go:475] Verifying addon ingress=true in "addons-505336"
	I0829 18:56:47.888042  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.591532083s)
	I0829 18:56:47.888288  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.693054549s)
	I0829 18:56:47.888355  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.591418267s)
	I0829 18:56:47.888369  426808 addons.go:475] Verifying addon registry=true in "addons-505336"
	I0829 18:56:47.888164  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.510271339s)
	I0829 18:56:47.888214  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.203195522s)
	I0829 18:56:47.888573  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.689515456s)
	I0829 18:56:47.888692  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.6067637s)
	I0829 18:56:47.888727  426808 addons.go:475] Verifying addon metrics-server=true in "addons-505336"
	I0829 18:56:47.888828  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.708739548s)
	W0829 18:56:47.888867  426808 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:56:47.888892  426808 retry.go:31] will retry after 243.453168ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:56:47.889021  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.08993811s)
	I0829 18:56:47.889557  426808 out.go:177] * Verifying registry addon...
	I0829 18:56:47.890460  426808 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-505336 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:56:47.890560  426808 out.go:177] * Verifying ingress addon...
	I0829 18:56:47.892602  426808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:56:47.893804  426808 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:56:47.898437  426808 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:56:47.898458  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:47.899780  426808 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:56:47.899802  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0829 18:56:47.980471  426808 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0829 18:56:48.133319  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:56:48.398857  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.476098  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:48.588637  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:48.901001  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:48.901892  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.400521  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.400869  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:49.575965  426808 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.762339839s)
	I0829 18:56:49.576177  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.983463959s)
	I0829 18:56:49.576264  426808 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-505336"
	I0829 18:56:49.577562  426808 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:56:49.578680  426808 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:56:49.583749  426808 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:56:49.584846  426808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:56:49.585520  426808 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:56:49.585550  426808 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:56:49.591140  426808 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:56:49.591206  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:49.685027  426808 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:56:49.685054  426808 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:56:49.708214  426808 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:49.708244  426808 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:56:49.793075  426808 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:56:49.898356  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:49.899061  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.091393  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.397459  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.398594  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:50.590749  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:50.790701  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.657284517s)
	I0829 18:56:50.896334  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:50.898257  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.000053  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:51.089146  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:51.104637  426808 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.311516707s)
	I0829 18:56:51.106502  426808 addons.go:475] Verifying addon gcp-auth=true in "addons-505336"
	I0829 18:56:51.108092  426808 out.go:177] * Verifying gcp-auth addon...
	I0829 18:56:51.110152  426808 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:56:51.187977  426808 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:56:51.396594  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.397609  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:51.590110  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:51.896825  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:51.897950  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.089969  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.397058  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.397784  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:52.590007  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:52.896777  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:52.897554  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.089799  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.396768  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.398020  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:53.499100  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:53.589725  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:53.896900  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:53.897957  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.089655  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.396261  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.397422  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:54.588823  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:54.898007  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:54.898354  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:55.090652  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.397114  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.397572  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:55.499388  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:55.589233  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:55.896412  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:55.897369  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.089084  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.396454  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.397607  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:56.589497  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:56.896715  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:56.897272  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.089500  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.449551  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.450116  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.588589  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:57.896689  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:57.897706  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:57.999622  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:56:58.089044  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.396642  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.397199  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:58.590410  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:58.896721  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:58.897884  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.090145  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.396159  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.397944  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:56:59.590255  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:56:59.896168  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:56:59.898172  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.000308  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:00.090776  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.396121  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.398200  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:00.590030  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:00.898207  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:00.898602  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.090080  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.396073  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.397624  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:01.589669  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:01.896770  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:01.897336  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.090220  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.396668  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.398358  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:02.499610  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:02.589935  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:02.896500  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:02.898152  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.090153  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.396322  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.398029  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:03.589075  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:03.895893  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:03.897857  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.090489  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.397092  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.398044  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:04.501680  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:04.590582  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:04.896201  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:04.897389  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.089941  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.397440  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.398331  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:05.589727  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:05.897040  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:05.897901  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.089733  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.396286  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.398417  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.590070  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:06.896276  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:06.898162  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:06.999092  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:07.088755  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.396255  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.399197  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:07.589829  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:07.896980  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:07.897422  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.090101  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.397121  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.397653  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.589408  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:08.897262  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:08.897849  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:08.999742  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:09.090094  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.396857  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.397843  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:09.589274  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:09.897114  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:09.897768  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.089644  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.396666  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.397801  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:10.589385  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:10.896137  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:10.897205  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.000452  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:11.089351  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.396304  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:57:11.398016  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:11.589549  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:11.897400  426808 kapi.go:107] duration metric: took 24.004796963s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:57:11.897875  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.090003  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.456228  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:12.590527  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:12.903269  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.000977  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:13.089964  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.397957  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:13.590415  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:13.898191  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.089147  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.399103  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:14.590227  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:14.901536  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.002801  426808 pod_ready.go:103] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"False"
	I0829 18:57:15.089915  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.398495  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:15.503433  426808 pod_ready.go:93] pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:15.503462  426808 pod_ready.go:82] duration metric: took 38.009790923s for pod "coredns-6f6b679f8f-2c86p" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.503491  426808 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-9ljnh" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.505070  426808 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-9ljnh" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-9ljnh" not found
	I0829 18:57:15.505095  426808 pod_ready.go:82] duration metric: took 1.595594ms for pod "coredns-6f6b679f8f-9ljnh" in "kube-system" namespace to be "Ready" ...
	E0829 18:57:15.505107  426808 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-9ljnh" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-9ljnh" not found
	I0829 18:57:15.505115  426808 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.509431  426808 pod_ready.go:93] pod "etcd-addons-505336" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:15.509451  426808 pod_ready.go:82] duration metric: took 4.327516ms for pod "etcd-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.509462  426808 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.514278  426808 pod_ready.go:93] pod "kube-apiserver-addons-505336" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:15.514298  426808 pod_ready.go:82] duration metric: took 4.828284ms for pod "kube-apiserver-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.514309  426808 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.518115  426808 pod_ready.go:93] pod "kube-controller-manager-addons-505336" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:15.518137  426808 pod_ready.go:82] duration metric: took 3.820399ms for pod "kube-controller-manager-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.518150  426808 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kj5d4" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.589526  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:15.697320  426808 pod_ready.go:93] pod "kube-proxy-kj5d4" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:15.697341  426808 pod_ready.go:82] duration metric: took 179.183733ms for pod "kube-proxy-kj5d4" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.697350  426808 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:15.898213  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.090330  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.097781  426808 pod_ready.go:93] pod "kube-scheduler-addons-505336" in "kube-system" namespace has status "Ready":"True"
	I0829 18:57:16.097809  426808 pod_ready.go:82] duration metric: took 400.447898ms for pod "kube-scheduler-addons-505336" in "kube-system" namespace to be "Ready" ...
	I0829 18:57:16.097819  426808 pod_ready.go:39] duration metric: took 38.612494425s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:57:16.097842  426808 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:57:16.097898  426808 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:57:16.116949  426808 api_server.go:72] duration metric: took 40.783056601s to wait for apiserver process to appear ...
	I0829 18:57:16.116974  426808 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:57:16.116994  426808 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:57:16.121123  426808 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:57:16.121968  426808 api_server.go:141] control plane version: v1.31.0
	I0829 18:57:16.121993  426808 api_server.go:131] duration metric: took 5.009702ms to wait for apiserver health ...
	I0829 18:57:16.122003  426808 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:57:16.303461  426808 system_pods.go:59] 18 kube-system pods found
	I0829 18:57:16.303501  426808 system_pods.go:61] "coredns-6f6b679f8f-2c86p" [f67d4ad9-1590-4d60-bc65-99d35dc95936] Running
	I0829 18:57:16.303513  426808 system_pods.go:61] "csi-hostpath-attacher-0" [775fcb58-3d17-43c4-b475-7649fcd6e015] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:57:16.303520  426808 system_pods.go:61] "csi-hostpath-resizer-0" [16bfb94a-a8a7-4334-bcbe-7bc1dd42061d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:57:16.303528  426808 system_pods.go:61] "csi-hostpathplugin-p6tdr" [f1ddb0db-9983-4573-832c-e588b9bad378] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:57:16.303534  426808 system_pods.go:61] "etcd-addons-505336" [fffeff5b-44c0-4db0-8734-7e95b2a2793f] Running
	I0829 18:57:16.303540  426808 system_pods.go:61] "kube-apiserver-addons-505336" [a850d56d-bf42-4cb1-99ae-482755049de1] Running
	I0829 18:57:16.303547  426808 system_pods.go:61] "kube-controller-manager-addons-505336" [a960f230-49c7-4378-ada6-812b94c4eb93] Running
	I0829 18:57:16.303553  426808 system_pods.go:61] "kube-ingress-dns-minikube" [9b161863-95d0-4562-9c1a-9ebb464a71f7] Running
	I0829 18:57:16.303558  426808 system_pods.go:61] "kube-proxy-kj5d4" [03308cc4-469b-465a-bb58-8585e006fb32] Running
	I0829 18:57:16.303562  426808 system_pods.go:61] "kube-scheduler-addons-505336" [1bd1dd8f-1518-4b86-a726-ea86fd6a8b8b] Running
	I0829 18:57:16.303565  426808 system_pods.go:61] "metrics-server-8988944d9-brsc8" [edb7fd7c-feae-493c-8e00-a0629ed0235b] Running
	I0829 18:57:16.303568  426808 system_pods.go:61] "nvidia-device-plugin-daemonset-vjlkn" [56fd832b-ce49-4077-95a5-fc21a8abdc0d] Running
	I0829 18:57:16.303572  426808 system_pods.go:61] "registry-6fb4cdfc84-7xs8r" [0bc8f454-eced-450f-ab5d-b961648307b9] Running
	I0829 18:57:16.303579  426808 system_pods.go:61] "registry-proxy-wwxwv" [ef74c371-cbb9-4c81-91dd-dcbc748f81d0] Running
	I0829 18:57:16.303588  426808 system_pods.go:61] "snapshot-controller-56fcc65765-5zcjx" [9fbc68ee-15f3-4427-a51a-61672bef2410] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:57:16.303600  426808 system_pods.go:61] "snapshot-controller-56fcc65765-qp5gp" [7b2a8697-1afe-40de-a372-01913a79e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:57:16.303609  426808 system_pods.go:61] "storage-provisioner" [617760c7-e1b6-4f22-bd1c-a7bece49bde3] Running
	I0829 18:57:16.303615  426808 system_pods.go:61] "tiller-deploy-b48cc5f79-7x5zs" [32e00d4f-af90-4a61-9f18-048d53bb045d] Running
	I0829 18:57:16.303623  426808 system_pods.go:74] duration metric: took 181.612484ms to wait for pod list to return data ...
	I0829 18:57:16.303634  426808 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:57:16.398599  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:16.497695  426808 default_sa.go:45] found service account: "default"
	I0829 18:57:16.497719  426808 default_sa.go:55] duration metric: took 194.077477ms for default service account to be created ...
	I0829 18:57:16.497729  426808 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:57:16.589686  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:16.704475  426808 system_pods.go:86] 18 kube-system pods found
	I0829 18:57:16.704516  426808 system_pods.go:89] "coredns-6f6b679f8f-2c86p" [f67d4ad9-1590-4d60-bc65-99d35dc95936] Running
	I0829 18:57:16.704531  426808 system_pods.go:89] "csi-hostpath-attacher-0" [775fcb58-3d17-43c4-b475-7649fcd6e015] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:57:16.704540  426808 system_pods.go:89] "csi-hostpath-resizer-0" [16bfb94a-a8a7-4334-bcbe-7bc1dd42061d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:57:16.704551  426808 system_pods.go:89] "csi-hostpathplugin-p6tdr" [f1ddb0db-9983-4573-832c-e588b9bad378] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:57:16.704557  426808 system_pods.go:89] "etcd-addons-505336" [fffeff5b-44c0-4db0-8734-7e95b2a2793f] Running
	I0829 18:57:16.704563  426808 system_pods.go:89] "kube-apiserver-addons-505336" [a850d56d-bf42-4cb1-99ae-482755049de1] Running
	I0829 18:57:16.704570  426808 system_pods.go:89] "kube-controller-manager-addons-505336" [a960f230-49c7-4378-ada6-812b94c4eb93] Running
	I0829 18:57:16.704581  426808 system_pods.go:89] "kube-ingress-dns-minikube" [9b161863-95d0-4562-9c1a-9ebb464a71f7] Running
	I0829 18:57:16.704587  426808 system_pods.go:89] "kube-proxy-kj5d4" [03308cc4-469b-465a-bb58-8585e006fb32] Running
	I0829 18:57:16.704595  426808 system_pods.go:89] "kube-scheduler-addons-505336" [1bd1dd8f-1518-4b86-a726-ea86fd6a8b8b] Running
	I0829 18:57:16.704601  426808 system_pods.go:89] "metrics-server-8988944d9-brsc8" [edb7fd7c-feae-493c-8e00-a0629ed0235b] Running
	I0829 18:57:16.704612  426808 system_pods.go:89] "nvidia-device-plugin-daemonset-vjlkn" [56fd832b-ce49-4077-95a5-fc21a8abdc0d] Running
	I0829 18:57:16.704620  426808 system_pods.go:89] "registry-6fb4cdfc84-7xs8r" [0bc8f454-eced-450f-ab5d-b961648307b9] Running
	I0829 18:57:16.704628  426808 system_pods.go:89] "registry-proxy-wwxwv" [ef74c371-cbb9-4c81-91dd-dcbc748f81d0] Running
	I0829 18:57:16.704638  426808 system_pods.go:89] "snapshot-controller-56fcc65765-5zcjx" [9fbc68ee-15f3-4427-a51a-61672bef2410] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:57:16.704649  426808 system_pods.go:89] "snapshot-controller-56fcc65765-qp5gp" [7b2a8697-1afe-40de-a372-01913a79e7bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:57:16.704656  426808 system_pods.go:89] "storage-provisioner" [617760c7-e1b6-4f22-bd1c-a7bece49bde3] Running
	I0829 18:57:16.704665  426808 system_pods.go:89] "tiller-deploy-b48cc5f79-7x5zs" [32e00d4f-af90-4a61-9f18-048d53bb045d] Running
	I0829 18:57:16.704676  426808 system_pods.go:126] duration metric: took 206.938947ms to wait for k8s-apps to be running ...
	I0829 18:57:16.704691  426808 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:57:16.704743  426808 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:57:16.718811  426808 system_svc.go:56] duration metric: took 14.091997ms WaitForService to wait for kubelet
	I0829 18:57:16.718886  426808 kubeadm.go:582] duration metric: took 41.384996745s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:57:16.718936  426808 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:57:16.897100  426808 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:57:16.897132  426808 node_conditions.go:123] node cpu capacity is 8
	I0829 18:57:16.897145  426808 node_conditions.go:105] duration metric: took 178.203156ms to run NodePressure ...
	I0829 18:57:16.897160  426808 start.go:241] waiting for startup goroutines ...
	I0829 18:57:16.897804  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.090245  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.398329  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:17.590135  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:17.899518  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.089925  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.398215  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:18.679090  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:18.897765  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.089882  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.400767  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:19.589801  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:19.898545  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.090711  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.399108  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:20.589422  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:20.929298  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.090673  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.399092  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:21.589913  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:21.897999  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.089348  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.398492  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:22.590200  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:22.898262  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.089992  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.397897  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:23.588880  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:23.897230  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.092962  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.397868  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:24.589467  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:24.897916  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.089186  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.398296  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:25.590111  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:25.898190  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.097593  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.398218  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:26.589754  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:26.898561  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.089993  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.398873  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:27.589334  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:27.898081  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.089095  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.398597  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:28.590603  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:28.897834  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.092101  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.398592  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:29.590502  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:29.897929  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.089994  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.398496  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:30.589545  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:30.898576  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.090193  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.397864  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:31.589122  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:31.898341  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.090037  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.398161  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:32.590329  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:32.898608  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.090329  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.398333  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:33.589742  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:33.898627  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.090444  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.399356  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:34.589953  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:57:34.898212  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.089327  426808 kapi.go:107] duration metric: took 45.504479426s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:57:35.398099  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:35.897573  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.398587  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:36.897668  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.398998  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:37.897928  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.397299  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:38.897833  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.398249  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:39.898158  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:40.397946  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:40.897562  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.399652  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:41.897748  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.397861  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:42.897932  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.397751  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:43.897395  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:44.397680  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:44.899724  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.398136  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:45.898283  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.397638  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:46.897469  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:47.397729  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:47.897727  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.398340  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:48.898027  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:49.398540  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:49.898450  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.399497  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:50.898713  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.399394  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:51.899282  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.398332  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:52.898730  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.399357  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:53.899088  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.583660  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:54.897898  426808 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:57:55.398431  426808 kapi.go:107] duration metric: took 1m7.504622689s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0829 18:58:14.613355  426808 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:58:14.613380  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:15.113627  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:15.614208  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:16.113699  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:16.613894  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:17.113753  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:17.613865  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:18.114269  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:18.613283  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:19.113694  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:19.614202  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:20.113718  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:20.614014  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:21.113043  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:21.614172  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:22.113515  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:22.613771  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:23.113678  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:23.613757  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:24.113581  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:24.613517  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:25.113357  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:25.613668  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:26.113644  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:26.613468  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:27.113589  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:27.613964  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:28.114064  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:28.612977  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:29.114111  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:29.614497  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:30.113459  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:30.613524  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:31.113430  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:31.613677  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:32.113699  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:32.614248  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:33.113149  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:33.613187  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:34.113161  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:34.613073  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:35.113119  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:35.613675  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:36.113666  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:36.613748  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:37.113860  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:37.614065  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:38.113232  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:38.613262  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:39.113299  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:39.614359  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:40.113223  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:40.613659  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:41.113378  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:41.613924  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:42.115913  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:42.614132  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:43.113597  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:43.613054  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:44.112905  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:44.614063  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:45.113033  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:45.613829  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:46.114034  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:46.613328  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:47.113206  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:47.613395  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:48.113525  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:48.612967  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:49.114286  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:49.613909  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:50.113887  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:50.613864  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:51.114885  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:51.613982  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:52.114218  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:52.613592  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:53.113932  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:53.613871  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:54.113726  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:54.614467  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:55.113802  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:55.614185  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:56.113042  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:56.613437  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:57.113882  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:57.614093  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:58.113879  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:58.614359  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:59.113845  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:58:59.613617  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:00.113994  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:00.614104  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:01.114365  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:01.613521  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:02.113748  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:02.614151  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:03.113079  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:03.613178  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:04.113006  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:04.613299  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:05.113732  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:05.614155  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:06.114001  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:06.613745  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:07.113758  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:07.614005  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:08.112879  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:08.613981  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:09.114306  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:09.613872  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:10.113505  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:10.613408  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:11.114032  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:11.613363  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:12.113269  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:12.613551  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:13.113691  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:13.614000  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:14.114364  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:14.613710  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:15.113717  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:15.613788  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:16.114013  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:16.614193  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:17.113363  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:17.614005  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:18.114276  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:18.613773  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:19.114238  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:19.614199  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:20.113278  426808 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:59:20.613405  426808 kapi.go:107] duration metric: took 2m29.503248572s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:59:20.615384  426808 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-505336 cluster.
	I0829 18:59:20.616849  426808 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:59:20.618227  426808 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:59:20.619712  426808 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, volcano, helm-tiller, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:59:20.620897  426808 addons.go:510] duration metric: took 2m45.286986995s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner volcano helm-tiller ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0829 18:59:20.620941  426808 start.go:246] waiting for cluster config update ...
	I0829 18:59:20.620959  426808 start.go:255] writing updated cluster config ...
	I0829 18:59:20.621217  426808 ssh_runner.go:195] Run: rm -f paused
	I0829 18:59:20.670441  426808 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:59:20.672068  426808 out.go:177] * Done! kubectl is now configured to use "addons-505336" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.596661177Z" level=info msg="ignoring event" container=9fa5d30490d42de5252f99c274014ef5b5e4157a936bada33bba952f1d8f788d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.599733256Z" level=info msg="ignoring event" container=bc8552c035b7cef312339dda09dd52658efb9ad7452b0f22ad0e58410796a4ad module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.601813741Z" level=info msg="ignoring event" container=6ba3cd71e3f182ba2b42560ddbb6459e60ac17bc8daa8d18afb6b03390a7c7d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.678206455Z" level=info msg="ignoring event" container=cbf39dbbc3e00020b0d216ae34d16589bfd26b1c3e59cfacc4436a4e1224893d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.843018869Z" level=info msg="ignoring event" container=e86ec71a5418a3d61e4104b6d62ff3f5ce2bec00048783a569a055534a024527 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.870928211Z" level=info msg="ignoring event" container=be1b6ca1cd14613bd2c0c01a45b5cd2b39bf547603b6d6d31a16e6e6ca0d3daa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:40 addons-505336 dockerd[1340]: time="2024-08-29T19:08:40.909175690Z" level=info msg="ignoring event" container=c99392d19249b3fdfda3fd522076ada5590e39603bf648cc85888c77f02ca880 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:43 addons-505336 cri-dockerd[1604]: time="2024-08-29T19:08:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6cac2822e1656a911e8c3e21fb604881b212b5ba18269ef32a9698a7a769ce2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 29 19:08:43 addons-505336 dockerd[1340]: time="2024-08-29T19:08:43.456050570Z" level=info msg="ignoring event" container=3be282e754dce4c72b5c8fc8fbf24e3f57b5592079d82725dab1ac1d67fdd1b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:43 addons-505336 dockerd[1340]: time="2024-08-29T19:08:43.509070226Z" level=info msg="ignoring event" container=2c68e4220999dece766e0c681bad0306335371339185c9fca8dac4564838e509 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:43 addons-505336 cri-dockerd[1604]: time="2024-08-29T19:08:43Z" level=info msg="Stop pulling image docker.io/kicbase/echo-server:1.0: Status: Downloaded newer image for kicbase/echo-server:1.0"
	Aug 29 19:08:46 addons-505336 dockerd[1340]: time="2024-08-29T19:08:46.796245700Z" level=info msg="ignoring event" container=9bdb5bbbb6aa94c57a26384f85ad208053f7481fd4d529c4867a7712c33592da module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:46 addons-505336 dockerd[1340]: time="2024-08-29T19:08:46.801455636Z" level=info msg="ignoring event" container=347c39f6c0f9316409397b5abde46757f6d46601ee7506a4c13cb3fdd7fb5a1c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:46 addons-505336 dockerd[1340]: time="2024-08-29T19:08:46.970696219Z" level=info msg="ignoring event" container=1bb10db18b5cef864e5cfd4385d071b7a838ca03870e9c7b7687c3996deb49be module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.006772968Z" level=info msg="ignoring event" container=5f71bfe2caac52638fe7fa363a02b511aa1a89af359ae349a7e913ba28d18c19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.501903626Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=d20715f8b703e129e626c12417d5431a3d29c3c8c137f8e738ee0196296d6213
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.551758741Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.554135419Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.562046442Z" level=info msg="ignoring event" container=d20715f8b703e129e626c12417d5431a3d29c3c8c137f8e738ee0196296d6213 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:08:47 addons-505336 dockerd[1340]: time="2024-08-29T19:08:47.703649810Z" level=info msg="ignoring event" container=d252d0d83d974e8d1134db309e9f35bdb0e60afb5e28eb9f04a162b5e99960a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:09:10 addons-505336 dockerd[1340]: time="2024-08-29T19:09:10.797478994Z" level=info msg="ignoring event" container=7bcb88fabc6055ae771d5d4bd718358591cd30a2c0ededbd87c7e82766baca11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:09:11 addons-505336 dockerd[1340]: time="2024-08-29T19:09:11.281907020Z" level=info msg="ignoring event" container=865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:09:11 addons-505336 dockerd[1340]: time="2024-08-29T19:09:11.339145398Z" level=info msg="ignoring event" container=af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:09:11 addons-505336 dockerd[1340]: time="2024-08-29T19:09:11.424428617Z" level=info msg="ignoring event" container=e90700162db654620bc235af2997bd9cd70607716de0de328f186de5141107e6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 19:09:11 addons-505336 dockerd[1340]: time="2024-08-29T19:09:11.488527129Z" level=info msg="ignoring event" container=834bc0d091fda50323b41f51c01d66e2322bad272880836f2d024801c8acdf4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c4a39b8caeee8       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  29 seconds ago      Running             hello-world-app           0                   b6cac2822e165       hello-world-app-55bf9c44b4-g5xvx
	0ec7b927ad9f8       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                38 seconds ago      Running             nginx                     0                   32686093fa469       nginx
	a534c97fbb20a       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   1238df6a3d4b3       gcp-auth-89d5ffd79-sjw58
	26b46191b2a33       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   8c4a9e457298a       ingress-nginx-admission-patch-f2kwv
	f80176804ff34       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   406d636181981       ingress-nginx-admission-create-5m9dx
	e1e2d5e029a8f       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   8860b3543854a       local-path-provisioner-86d989889c-rhlfl
	a60f0c6816401       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   59e9a9ed7b7b9       storage-provisioner
	76912389427f4       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   865f01957f01a       coredns-6f6b679f8f-2c86p
	f9aa38df9c36d       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   edd6c658a97bc       kube-proxy-kj5d4
	b64c23d78d662       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   37baa47785d61       kube-apiserver-addons-505336
	5d5970f46309a       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   4ec82008371f4       kube-controller-manager-addons-505336
	600effe6db079       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   8fc8101eb49c4       etcd-addons-505336
	11536d164fd70       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   937c40dbc9cfc       kube-scheduler-addons-505336
	
	
	==> coredns [76912389427f] <==
	Trace[630911734]: [30.000350004s] [30.000350004s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[274992665]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (29-Aug-2024 18:56:38.388) (total time: 30000ms):
	Trace[274992665]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:57:08.389)
	Trace[274992665]: [30.000347993s] [30.000347993s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59405 - 21872 "HINFO IN 764184260452644925.6571257308608008398. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.009208319s
	[INFO] 10.244.0.26:52465 - 61767 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000405931s
	[INFO] 10.244.0.26:36476 - 65111 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000502619s
	[INFO] 10.244.0.26:55198 - 38137 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123193s
	[INFO] 10.244.0.26:38333 - 9663 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000152873s
	[INFO] 10.244.0.26:36202 - 39299 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119616s
	[INFO] 10.244.0.26:37698 - 13537 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188924s
	[INFO] 10.244.0.26:56550 - 39640 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00726122s
	[INFO] 10.244.0.26:60563 - 9991 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008000464s
	[INFO] 10.244.0.26:37248 - 30073 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00582152s
	[INFO] 10.244.0.26:33710 - 38070 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005997874s
	[INFO] 10.244.0.26:53380 - 401 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004069723s
	[INFO] 10.244.0.26:40918 - 60520 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004557015s
	[INFO] 10.244.0.26:50915 - 14253 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000950356s
	[INFO] 10.244.0.26:56831 - 52255 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001094926s
	
	
	==> describe nodes <==
	Name:               addons-505336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-505336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5512bd76519cf55fa04aeca1cd01a1369e298033
	                    minikube.k8s.io/name=addons-505336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_56_30_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-505336
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:56:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-505336
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:09:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 19:09:05 +0000   Thu, 29 Aug 2024 18:56:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 19:09:05 +0000   Thu, 29 Aug 2024 18:56:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 19:09:05 +0000   Thu, 29 Aug 2024 18:56:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 19:09:05 +0000   Thu, 29 Aug 2024 18:56:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-505336
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 25840fc7878c4172be50473ab8518e0b
	  System UUID:                ad351877-c4e8-4d9a-8fdb-bd3c50cb3f4a
	  Boot ID:                    8d049dc3-d201-4992-9948-d4c3816a3020
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.0
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-g5xvx           0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  gcp-auth                    gcp-auth-89d5ffd79-sjw58                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-6f6b679f8f-2c86p                   100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-505336                         100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-505336               250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-505336      200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kj5d4                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-505336               100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-rhlfl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-505336 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-505336 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-505336 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-505336 event: Registered Node addons-505336 in Controller
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 60 3b 9f ac e7 08 06
	[  +1.384095] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5a d8 23 28 c3 f4 08 06
	[  +1.301152] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e ed 65 be af e3 08 06
	[  +5.893875] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 43 8f a8 7f 7e 08 06
	[  +0.096252] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 c5 c2 03 e0 f5 08 06
	[  +0.076818] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 56 49 69 8b fb c1 08 06
	[ +22.440993] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff da 90 14 68 71 81 08 06
	[Aug29 18:58] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e a5 d9 41 7f 44 08 06
	[  +0.032870] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 75 87 14 46 84 08 06
	[Aug29 18:59] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 9c a6 72 9c fc 08 06
	[  +0.000574] IPv4: martian source 10.244.0.26 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 17 79 01 18 43 08 06
	[Aug29 19:08] IPv4: martian source 10.244.0.1 from 10.244.0.29, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 3e ce 45 e2 8e 3d 08 06
	[ +34.846395] IPv4: martian source 10.244.0.37 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 90 14 68 71 81 08 06
	
	
	==> etcd [600effe6db07] <==
	{"level":"info","ts":"2024-08-29T18:56:25.787598Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:56:25.787620Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:56:25.787900Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:56:25.787904Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:56:25.787925Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:56:25.787973Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:56:25.787995Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:56:25.788724Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:56:25.788761Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:56:25.789510Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-29T18:56:25.789932Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:56:46.991268Z","caller":"traceutil/trace.go:171","msg":"trace[2070930315] transaction","detail":"{read_only:false; response_revision:808; number_of_response:1; }","duration":"102.145389ms","start":"2024-08-29T18:56:46.889101Z","end":"2024-08-29T18:56:46.991246Z","steps":["trace[2070930315] 'process raft request'  (duration: 100.591885ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:46.992663Z","caller":"traceutil/trace.go:171","msg":"trace[746986554] transaction","detail":"{read_only:false; response_revision:809; number_of_response:1; }","duration":"101.504246ms","start":"2024-08-29T18:56:46.891140Z","end":"2024-08-29T18:56:46.992644Z","steps":["trace[746986554] 'process raft request'  (duration: 101.004918ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:46.992801Z","caller":"traceutil/trace.go:171","msg":"trace[206749501] transaction","detail":"{read_only:false; response_revision:810; number_of_response:1; }","duration":"101.532069ms","start":"2024-08-29T18:56:46.891259Z","end":"2024-08-29T18:56:46.992791Z","steps":["trace[206749501] 'process raft request'  (duration: 100.973944ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:56:46.992947Z","caller":"traceutil/trace.go:171","msg":"trace[1518821797] transaction","detail":"{read_only:false; response_revision:811; number_of_response:1; }","duration":"101.517405ms","start":"2024-08-29T18:56:46.891422Z","end":"2024-08-29T18:56:46.992939Z","steps":["trace[1518821797] 'process raft request'  (duration: 100.86305ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:15.272337Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.93534ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:15.272477Z","caller":"traceutil/trace.go:171","msg":"trace[295395045] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1070; }","duration":"160.101553ms","start":"2024-08-29T18:57:15.112358Z","end":"2024-08-29T18:57:15.272460Z","steps":["trace[295395045] 'range keys from in-memory index tree'  (duration: 159.867036ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:57:54.580717Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"179.087676ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031541713176987 > lease_revoke:<id:70cc919f7efc08dd>","response":"size:29"}
	{"level":"info","ts":"2024-08-29T18:57:54.580899Z","caller":"traceutil/trace.go:171","msg":"trace[1785589315] transaction","detail":"{read_only:false; response_revision:1270; number_of_response:1; }","duration":"174.509997ms","start":"2024-08-29T18:57:54.406379Z","end":"2024-08-29T18:57:54.580889Z","steps":["trace[1785589315] 'process raft request'  (duration: 174.416555ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:57:54.580918Z","caller":"traceutil/trace.go:171","msg":"trace[1988155826] linearizableReadLoop","detail":"{readStateIndex:1303; appliedIndex:1302; }","duration":"185.110299ms","start":"2024-08-29T18:57:54.395783Z","end":"2024-08-29T18:57:54.580893Z","steps":["trace[1988155826] 'read index received'  (duration: 5.695664ms)","trace[1988155826] 'applied index is now lower than readState.Index'  (duration: 179.411618ms)"],"step_count":2}
	{"level":"warn","ts":"2024-08-29T18:57:54.581149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"185.343194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-29T18:57:54.581196Z","caller":"traceutil/trace.go:171","msg":"trace[1796819428] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1270; }","duration":"185.406877ms","start":"2024-08-29T18:57:54.395778Z","end":"2024-08-29T18:57:54.581185Z","steps":["trace[1796819428] 'agreement among raft nodes before linearized reading'  (duration: 185.254384ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T19:06:25.825015Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1885}
	{"level":"info","ts":"2024-08-29T19:06:25.859286Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1885,"took":"33.722495ms","hash":3218043340,"current-db-size-bytes":8724480,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4857856,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-08-29T19:06:25.859334Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3218043340,"revision":1885,"compact-revision":-1}
	
	
	==> gcp-auth [a534c97fbb20] <==
	2024/08/29 18:59:58 Ready to write response ...
	2024/08/29 19:08:05 Ready to marshal response ...
	2024/08/29 19:08:05 Ready to write response ...
	2024/08/29 19:08:06 Ready to marshal response ...
	2024/08/29 19:08:06 Ready to write response ...
	2024/08/29 19:08:06 Ready to marshal response ...
	2024/08/29 19:08:06 Ready to write response ...
	2024/08/29 19:08:07 Ready to marshal response ...
	2024/08/29 19:08:07 Ready to write response ...
	2024/08/29 19:08:10 Ready to marshal response ...
	2024/08/29 19:08:10 Ready to write response ...
	2024/08/29 19:08:14 Ready to marshal response ...
	2024/08/29 19:08:14 Ready to write response ...
	2024/08/29 19:08:26 Ready to marshal response ...
	2024/08/29 19:08:26 Ready to write response ...
	2024/08/29 19:08:26 Ready to marshal response ...
	2024/08/29 19:08:26 Ready to write response ...
	2024/08/29 19:08:26 Ready to marshal response ...
	2024/08/29 19:08:26 Ready to write response ...
	2024/08/29 19:08:31 Ready to marshal response ...
	2024/08/29 19:08:31 Ready to write response ...
	2024/08/29 19:08:33 Ready to marshal response ...
	2024/08/29 19:08:33 Ready to write response ...
	2024/08/29 19:08:42 Ready to marshal response ...
	2024/08/29 19:08:42 Ready to write response ...
	
	
	==> kernel <==
	 19:09:12 up 21:51,  0 users,  load average: 0.88, 0.51, 0.35
	Linux addons-505336 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [b64c23d78d66] <==
	W0829 18:59:50.085001       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 18:59:50.481430       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0829 18:59:50.595730       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:59:50.994697       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 19:08:14.931399       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 19:08:15.916090       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0829 19:08:26.575615       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.99.91.187"}
	I0829 19:08:27.581443       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 19:08:28.600616       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 19:08:32.960035       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 19:08:33.123425       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.202.92"}
	I0829 19:08:42.692729       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.2.228"}
	I0829 19:08:46.662028       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:08:46.662082       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:08:46.674317       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:08:46.674423       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:08:46.675352       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:08:46.675385       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:08:46.685014       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:08:46.685063       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 19:08:46.695427       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 19:08:46.695475       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 19:08:47.676287       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 19:08:47.696566       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 19:08:47.704547       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [5d5970f46309] <==
	E0829 19:08:55.960195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:08:56.022136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:08:56.022174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:08:59.343084       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:08:59.343123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:00.060971       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:00.061019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:00.967705       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:00.967747       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:01.342704       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:01.342743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:02.419575       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:02.419616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:09:04.799559       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 19:09:04.799592       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 19:09:04.910604       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 19:09:04.910650       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 19:09:05.286713       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-505336"
	W0829 19:09:06.254234       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:06.254273       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:08.004724       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:08.004771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 19:09:10.383697       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 19:09:10.383736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 19:09:11.221111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="9.35µs"
	
	
	==> kube-proxy [f9aa38df9c36] <==
	I0829 18:56:37.291945       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:56:38.094182       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:56:38.094269       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:56:38.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:56:38.593442       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:56:38.595874       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:56:38.596298       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:56:38.596314       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:56:38.598409       1 config.go:197] "Starting service config controller"
	I0829 18:56:38.598427       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:56:38.598449       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:56:38.598455       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:56:38.675671       1 config.go:326] "Starting node config controller"
	I0829 18:56:38.675690       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:56:38.775045       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:56:38.775043       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0829 18:56:38.776590       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [11536d164fd7] <==
	W0829 18:56:27.399473       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:56:27.399497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.216435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:56:28.216479       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.220757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:56:28.220792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.252070       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:28.252112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.259348       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:56:28.259377       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.300690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:28.300722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.360101       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:56:28.360148       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.374495       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:56:28.374546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.448389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:56:28.448441       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.530011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:56:28.530057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.548389       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:56:28.548432       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:56:28.549094       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 18:56:28.549123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:56:28.797115       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 19:08:49 addons-505336 kubelet[2438]: I0829 19:08:49.498098    2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbc68ee-15f3-4427-a51a-61672bef2410" path="/var/lib/kubelet/pods/9fbc68ee-15f3-4427-a51a-61672bef2410/volumes"
	Aug 29 19:08:51 addons-505336 kubelet[2438]: E0829 19:08:51.489732    2438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f9b774b1-c20c-4cf3-814a-6d65d22b20d7"
	Aug 29 19:09:01 addons-505336 kubelet[2438]: E0829 19:09:01.489403    2438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="ce9ebc43-07ee-4c49-bbf9-3a21a2b71153"
	Aug 29 19:09:05 addons-505336 kubelet[2438]: E0829 19:09:05.489648    2438 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f9b774b1-c20c-4cf3-814a-6d65d22b20d7"
	Aug 29 19:09:10 addons-505336 kubelet[2438]: I0829 19:09:10.966110    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-gcp-creds\") pod \"ce9ebc43-07ee-4c49-bbf9-3a21a2b71153\" (UID: \"ce9ebc43-07ee-4c49-bbf9-3a21a2b71153\") "
	Aug 29 19:09:10 addons-505336 kubelet[2438]: I0829 19:09:10.966173    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk8rs\" (UniqueName: \"kubernetes.io/projected/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-kube-api-access-vk8rs\") pod \"ce9ebc43-07ee-4c49-bbf9-3a21a2b71153\" (UID: \"ce9ebc43-07ee-4c49-bbf9-3a21a2b71153\") "
	Aug 29 19:09:10 addons-505336 kubelet[2438]: I0829 19:09:10.966236    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "ce9ebc43-07ee-4c49-bbf9-3a21a2b71153" (UID: "ce9ebc43-07ee-4c49-bbf9-3a21a2b71153"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 19:09:10 addons-505336 kubelet[2438]: I0829 19:09:10.968302    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-kube-api-access-vk8rs" (OuterVolumeSpecName: "kube-api-access-vk8rs") pod "ce9ebc43-07ee-4c49-bbf9-3a21a2b71153" (UID: "ce9ebc43-07ee-4c49-bbf9-3a21a2b71153"). InnerVolumeSpecName "kube-api-access-vk8rs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.066744    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vk8rs\" (UniqueName: \"kubernetes.io/projected/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-kube-api-access-vk8rs\") on node \"addons-505336\" DevicePath \"\""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.066804    2438 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153-gcp-creds\") on node \"addons-505336\" DevicePath \"\""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.495230    2438 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce9ebc43-07ee-4c49-bbf9-3a21a2b71153" path="/var/lib/kubelet/pods/ce9ebc43-07ee-4c49-bbf9-3a21a2b71153/volumes"
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.578054    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs5bp\" (UniqueName: \"kubernetes.io/projected/0bc8f454-eced-450f-ab5d-b961648307b9-kube-api-access-gs5bp\") pod \"0bc8f454-eced-450f-ab5d-b961648307b9\" (UID: \"0bc8f454-eced-450f-ab5d-b961648307b9\") "
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.578098    2438 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xp8fx\" (UniqueName: \"kubernetes.io/projected/ef74c371-cbb9-4c81-91dd-dcbc748f81d0-kube-api-access-xp8fx\") pod \"ef74c371-cbb9-4c81-91dd-dcbc748f81d0\" (UID: \"ef74c371-cbb9-4c81-91dd-dcbc748f81d0\") "
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.579868    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef74c371-cbb9-4c81-91dd-dcbc748f81d0-kube-api-access-xp8fx" (OuterVolumeSpecName: "kube-api-access-xp8fx") pod "ef74c371-cbb9-4c81-91dd-dcbc748f81d0" (UID: "ef74c371-cbb9-4c81-91dd-dcbc748f81d0"). InnerVolumeSpecName "kube-api-access-xp8fx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.579870    2438 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bc8f454-eced-450f-ab5d-b961648307b9-kube-api-access-gs5bp" (OuterVolumeSpecName: "kube-api-access-gs5bp") pod "0bc8f454-eced-450f-ab5d-b961648307b9" (UID: "0bc8f454-eced-450f-ab5d-b961648307b9"). InnerVolumeSpecName "kube-api-access-gs5bp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.679475    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gs5bp\" (UniqueName: \"kubernetes.io/projected/0bc8f454-eced-450f-ab5d-b961648307b9-kube-api-access-gs5bp\") on node \"addons-505336\" DevicePath \"\""
	Aug 29 19:09:11 addons-505336 kubelet[2438]: I0829 19:09:11.679516    2438 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xp8fx\" (UniqueName: \"kubernetes.io/projected/ef74c371-cbb9-4c81-91dd-dcbc748f81d0-kube-api-access-xp8fx\") on node \"addons-505336\" DevicePath \"\""
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.052768    2438 scope.go:117] "RemoveContainer" containerID="af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.069149    2438 scope.go:117] "RemoveContainer" containerID="af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: E0829 19:09:12.071321    2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b" containerID="af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.071360    2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b"} err="failed to get container status \"af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b\": rpc error: code = Unknown desc = Error response from daemon: No such container: af431d61ffaa5ef6d697a88b685cb8a38b48d1cc4c9c3ae116b8e2216ca2d52b"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.071378    2438 scope.go:117] "RemoveContainer" containerID="865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.086555    2438 scope.go:117] "RemoveContainer" containerID="865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: E0829 19:09:12.087325    2438 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e" containerID="865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e"
	Aug 29 19:09:12 addons-505336 kubelet[2438]: I0829 19:09:12.087365    2438 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e"} err="failed to get container status \"865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 865ff0e7593f3851f753ee3aa7910b933c3ca4b6bc3dca72ca2e31db85e78d4e"
	
	
	==> storage-provisioner [a60f0c681640] <==
	I0829 18:56:42.281135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:56:42.386247       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:56:42.386296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:56:42.484026       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:56:42.484376       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-505336_60e9b849-e378-4aed-b33d-e0e90a4d1d41!
	I0829 18:56:42.486002       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac97d5f2-4185-4dd9-b5ca-de8119bbf3dc", APIVersion:"v1", ResourceVersion:"552", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-505336_60e9b849-e378-4aed-b33d-e0e90a4d1d41 became leader
	I0829 18:56:42.585378       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-505336_60e9b849-e378-4aed-b33d-e0e90a4d1d41!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-505336 -n addons-505336
helpers_test.go:261: (dbg) Run:  kubectl --context addons-505336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-505336 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-505336 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-505336/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:59:58 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw2tc (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-zw2tc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-505336
	  Normal   Pulling    7m43s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m43s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m31s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.39s)


Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.96
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.18
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 3.82
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.19
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.96
21 TestBinaryMirror 0.72
22 TestOffline 72.23
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 211.25
29 TestAddons/serial/Volcano 37.47
31 TestAddons/serial/GCPAuth/Namespaces 0.12
34 TestAddons/parallel/Ingress 18.87
35 TestAddons/parallel/InspektorGadget 11.71
36 TestAddons/parallel/MetricsServer 5.63
37 TestAddons/parallel/HelmTiller 8.77
39 TestAddons/parallel/CSI 46.4
40 TestAddons/parallel/Headlamp 17.34
41 TestAddons/parallel/CloudSpanner 5.39
42 TestAddons/parallel/LocalPath 8.99
43 TestAddons/parallel/NvidiaDevicePlugin 5.39
44 TestAddons/parallel/Yakd 11.69
45 TestAddons/StoppedEnableDisable 11.03
46 TestCertOptions 29.74
47 TestCertExpiration 245.93
48 TestDockerFlags 27.24
49 TestForceSystemdFlag 37.94
50 TestForceSystemdEnv 25.13
52 TestKVMDriverInstallOrUpdate 1.26
56 TestErrorSpam/setup 23.65
57 TestErrorSpam/start 0.53
58 TestErrorSpam/status 0.8
59 TestErrorSpam/pause 1.09
60 TestErrorSpam/unpause 1.27
61 TestErrorSpam/stop 1.87
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 65.46
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 34.23
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.18
73 TestFunctional/serial/CacheCmd/cache/add_local 0.66
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 39.12
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 0.93
84 TestFunctional/serial/LogsFileCmd 0.94
85 TestFunctional/serial/InvalidService 3.99
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 9.59
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.16
91 TestFunctional/parallel/StatusCmd 0.88
95 TestFunctional/parallel/ServiceCmdConnect 18.49
96 TestFunctional/parallel/AddonsCmd 0.19
97 TestFunctional/parallel/PersistentVolumeClaim 40.64
99 TestFunctional/parallel/SSHCmd 0.55
100 TestFunctional/parallel/CpCmd 1.44
101 TestFunctional/parallel/MySQL 25.2
102 TestFunctional/parallel/FileSync 0.27
103 TestFunctional/parallel/CertSync 1.64
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
111 TestFunctional/parallel/License 0.2
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
114 TestFunctional/parallel/Version/components 1.02
115 TestFunctional/parallel/DockerEnv/bash 0.98
116 TestFunctional/parallel/ProfileCmd/profile_list 0.38
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.35
122 TestFunctional/parallel/ImageCommands/Setup 0.57
123 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.46
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.22
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.98
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
136 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
137 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
138 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
145 TestFunctional/parallel/MountCmd/any-port 14.52
146 TestFunctional/parallel/MountCmd/specific-port 1.89
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.56
148 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
149 TestFunctional/parallel/ServiceCmd/List 1.66
150 TestFunctional/parallel/ServiceCmd/JSONOutput 1.66
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
152 TestFunctional/parallel/ServiceCmd/Format 0.49
153 TestFunctional/parallel/ServiceCmd/URL 0.49
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 106.55
161 TestMultiControlPlane/serial/DeployApp 5.88
162 TestMultiControlPlane/serial/PingHostFromPods 1.06
163 TestMultiControlPlane/serial/AddWorkerNode 20.7
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.6
166 TestMultiControlPlane/serial/CopyFile 14.94
167 TestMultiControlPlane/serial/StopSecondaryNode 11.35
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.46
169 TestMultiControlPlane/serial/RestartSecondaryNode 19.52
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.45
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 232.42
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.37
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.46
174 TestMultiControlPlane/serial/StopCluster 32.35
175 TestMultiControlPlane/serial/RestartCluster 92.27
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
177 TestMultiControlPlane/serial/AddSecondaryNode 39.62
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.62
181 TestImageBuild/serial/Setup 24.02
182 TestImageBuild/serial/NormalBuild 1.22
183 TestImageBuild/serial/BuildWithBuildArg 0.74
184 TestImageBuild/serial/BuildWithDockerIgnore 0.56
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.55
189 TestJSONOutput/start/Command 37.33
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.53
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.42
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.88
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
214 TestKicCustomNetwork/create_custom_network 22.99
215 TestKicCustomNetwork/use_default_bridge_network 26.17
216 TestKicExistingNetwork 25.24
217 TestKicCustomSubnet 23.21
218 TestKicStaticIP 27.21
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 47.97
223 TestMountStart/serial/StartWithMountFirst 9.55
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 9.28
226 TestMountStart/serial/VerifyMountSecond 0.23
227 TestMountStart/serial/DeleteFirst 1.43
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.17
230 TestMountStart/serial/RestartStopped 7.73
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 57.5
235 TestMultiNode/serial/DeployApp2Nodes 40.32
236 TestMultiNode/serial/PingHostFrom2Pods 0.71
237 TestMultiNode/serial/AddNode 17.89
238 TestMultiNode/serial/MultiNodeLabels 0.06
239 TestMultiNode/serial/ProfileList 0.28
240 TestMultiNode/serial/CopyFile 8.78
241 TestMultiNode/serial/StopNode 2.1
242 TestMultiNode/serial/StartAfterStop 9.73
243 TestMultiNode/serial/RestartKeepsNodes 98.69
244 TestMultiNode/serial/DeleteNode 5.14
245 TestMultiNode/serial/StopMultiNode 21.36
246 TestMultiNode/serial/RestartMultiNode 57.85
247 TestMultiNode/serial/ValidateNameConflict 24.72
252 TestPreload 85.62
254 TestScheduledStopUnix 94.55
255 TestSkaffold 95.5
257 TestInsufficientStorage 12.43
258 TestRunningBinaryUpgrade 60.9
260 TestKubernetesUpgrade 341.41
261 TestMissingContainerUpgrade 137.49
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
264 TestNoKubernetes/serial/StartWithK8s 34.89
265 TestNoKubernetes/serial/StartWithStopK8s 16.39
277 TestNoKubernetes/serial/Start 8.93
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
279 TestNoKubernetes/serial/ProfileList 3.15
280 TestNoKubernetes/serial/Stop 1.19
281 TestNoKubernetes/serial/StartNoArgs 7.48
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
283 TestStoppedBinaryUpgrade/Setup 0.39
284 TestStoppedBinaryUpgrade/Upgrade 124.73
285 TestStoppedBinaryUpgrade/MinikubeLogs 1.16
294 TestPause/serial/Start 37.85
295 TestNetworkPlugins/group/auto/Start 71.86
296 TestNetworkPlugins/group/kindnet/Start 58.55
297 TestPause/serial/SecondStartNoReconfiguration 34.63
298 TestPause/serial/Pause 0.53
299 TestPause/serial/VerifyStatus 0.27
300 TestPause/serial/Unpause 0.42
301 TestPause/serial/PauseAgain 0.6
302 TestPause/serial/DeletePaused 2.1
303 TestPause/serial/VerifyDeletedResources 0.64
304 TestNetworkPlugins/group/calico/Start 55.09
305 TestNetworkPlugins/group/auto/KubeletFlags 0.31
306 TestNetworkPlugins/group/auto/NetCatPod 10.33
307 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
308 TestNetworkPlugins/group/auto/DNS 0.14
309 TestNetworkPlugins/group/auto/Localhost 0.11
310 TestNetworkPlugins/group/auto/HairPin 0.11
311 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
312 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
313 TestNetworkPlugins/group/kindnet/DNS 0.15
314 TestNetworkPlugins/group/kindnet/Localhost 0.16
315 TestNetworkPlugins/group/kindnet/HairPin 0.13
316 TestNetworkPlugins/group/custom-flannel/Start 47.24
317 TestNetworkPlugins/group/false/Start 64.52
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.31
320 TestNetworkPlugins/group/calico/NetCatPod 10.25
321 TestNetworkPlugins/group/calico/DNS 0.15
322 TestNetworkPlugins/group/calico/Localhost 0.12
323 TestNetworkPlugins/group/calico/HairPin 0.13
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
326 TestNetworkPlugins/group/enable-default-cni/Start 67.56
327 TestNetworkPlugins/group/custom-flannel/DNS 0.14
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
330 TestNetworkPlugins/group/flannel/Start 49.01
331 TestNetworkPlugins/group/false/KubeletFlags 0.32
332 TestNetworkPlugins/group/false/NetCatPod 11.25
333 TestNetworkPlugins/group/false/DNS 0.16
334 TestNetworkPlugins/group/false/Localhost 0.14
335 TestNetworkPlugins/group/false/HairPin 0.13
336 TestNetworkPlugins/group/bridge/Start 40.15
337 TestNetworkPlugins/group/kubenet/Start 32.07
338 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
339 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
342 TestNetworkPlugins/group/flannel/NetCatPod 10.2
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
344 TestNetworkPlugins/group/bridge/NetCatPod 11.18
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
348 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
349 TestNetworkPlugins/group/kubenet/NetCatPod 9.19
350 TestNetworkPlugins/group/flannel/DNS 0.17
351 TestNetworkPlugins/group/flannel/Localhost 0.15
352 TestNetworkPlugins/group/flannel/HairPin 0.14
353 TestNetworkPlugins/group/bridge/DNS 0.14
354 TestNetworkPlugins/group/bridge/Localhost 0.11
355 TestNetworkPlugins/group/bridge/HairPin 0.12
356 TestNetworkPlugins/group/kubenet/DNS 0.22
357 TestNetworkPlugins/group/kubenet/Localhost 0.12
358 TestNetworkPlugins/group/kubenet/HairPin 0.12
360 TestStartStop/group/old-k8s-version/serial/FirstStart 162.5
362 TestStartStop/group/no-preload/serial/FirstStart 47.6
364 TestStartStop/group/embed-certs/serial/FirstStart 74.22
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 73.13
367 TestStartStop/group/no-preload/serial/DeployApp 8.23
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.82
369 TestStartStop/group/no-preload/serial/Stop 10.74
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
371 TestStartStop/group/no-preload/serial/SecondStart 300.16
372 TestStartStop/group/embed-certs/serial/DeployApp 9.27
373 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
374 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.99
375 TestStartStop/group/embed-certs/serial/Stop 10.82
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
377 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.95
378 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
379 TestStartStop/group/embed-certs/serial/SecondStart 262.68
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.41
382 TestStartStop/group/old-k8s-version/serial/DeployApp 8.37
383 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
384 TestStartStop/group/old-k8s-version/serial/Stop 10.76
385 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
386 TestStartStop/group/old-k8s-version/serial/SecondStart 24.27
387 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 26.01
388 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
389 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
390 TestStartStop/group/old-k8s-version/serial/Pause 2.32
392 TestStartStop/group/newest-cni/serial/FirstStart 31.26
393 TestStartStop/group/newest-cni/serial/DeployApp 0
394 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
395 TestStartStop/group/newest-cni/serial/Stop 10.92
396 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
397 TestStartStop/group/newest-cni/serial/SecondStart 14.6
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
401 TestStartStop/group/newest-cni/serial/Pause 2.4
402 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
405 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
406 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
407 TestStartStop/group/embed-certs/serial/Pause 2.22
408 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
409 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
410 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
411 TestStartStop/group/no-preload/serial/Pause 2.39
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
TestDownloadOnly/v1.20.0/json-events (9.96s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-056279 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-056279 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (9.956038513s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.96s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-056279
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-056279: exit status 85 (57.244502ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-056279 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |          |
	|         | -p download-only-056279        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:32
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:32.998296  425508 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:32.998445  425508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:32.998456  425508 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:32.998462  425508 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:32.998670  425508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	W0829 18:55:32.998834  425508 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19530-418716/.minikube/config/config.json: open /home/jenkins/minikube-integration/19530-418716/.minikube/config/config.json: no such file or directory
	I0829 18:55:32.999442  425508 out.go:352] Setting JSON to true
	I0829 18:55:33.000387  425508 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":77879,"bootTime":1724879854,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:33.000452  425508 start.go:139] virtualization: kvm guest
	I0829 18:55:33.003044  425508 out.go:97] [download-only-056279] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:55:33.003180  425508 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:55:33.003231  425508 notify.go:220] Checking for updates...
	I0829 18:55:33.004668  425508 out.go:169] MINIKUBE_LOCATION=19530
	I0829 18:55:33.006025  425508 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:33.007409  425508 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 18:55:33.008603  425508 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	I0829 18:55:33.009746  425508 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:55:33.011920  425508 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:55:33.012150  425508 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:33.034901  425508 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:55:33.035064  425508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:33.080871  425508 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:55:33.072368751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:33.080978  425508 docker.go:307] overlay module found
	I0829 18:55:33.082929  425508 out.go:97] Using the docker driver based on user configuration
	I0829 18:55:33.082961  425508 start.go:297] selected driver: docker
	I0829 18:55:33.082968  425508 start.go:901] validating driver "docker" against <nil>
	I0829 18:55:33.083059  425508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:33.125575  425508 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:55:33.116845671 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:33.125798  425508 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:33.126346  425508 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:55:33.126517  425508 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:55:33.128518  425508 out.go:169] Using Docker driver with root privileges
	I0829 18:55:33.129697  425508 cni.go:84] Creating CNI manager for ""
	I0829 18:55:33.129723  425508 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 18:55:33.129803  425508 start.go:340] cluster config:
	{Name:download-only-056279 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-056279 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:55:33.131154  425508 out.go:97] Starting "download-only-056279" primary control-plane node in "download-only-056279" cluster
	I0829 18:55:33.131180  425508 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:55:33.132405  425508 out.go:97] Pulling base image v0.0.44-1724862063-19530 ...
	I0829 18:55:33.132440  425508 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:55:33.132547  425508 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local docker daemon
	I0829 18:55:33.147763  425508 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0829 18:55:33.147936  425508 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 in local cache directory
	I0829 18:55:33.148072  425508 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 to local cache
	I0829 18:55:33.151953  425508 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0829 18:55:33.151972  425508 cache.go:56] Caching tarball of preloaded images
	I0829 18:55:33.152084  425508 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:55:33.153983  425508 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:55:33.154006  425508 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:55:33.180608  425508 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0829 18:55:35.716867  425508 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:55:35.716969  425508 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19530-418716/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:55:36.464192  425508 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 18:55:36.464588  425508 profile.go:143] Saving config to /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/download-only-056279/config.json ...
	I0829 18:55:36.464646  425508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/download-only-056279/config.json: {Name:mkbb259ebdbce7868c9c6c77ac2ccc1d1df3cef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:55:36.464857  425508 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:55:36.465034  425508 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19530-418716/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-056279 host does not exist
	  To start a cluster, run: "minikube start -p download-only-056279"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-056279
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (3.82s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-564308 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-564308 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.821293856s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (3.82s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-564308
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-564308: exit status 85 (56.655337ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-056279 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | -p download-only-056279        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| delete  | -p download-only-056279        | download-only-056279 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC | 29 Aug 24 18:55 UTC |
	| start   | -o=json --download-only        | download-only-564308 | jenkins | v1.33.1 | 29 Aug 24 18:55 UTC |                     |
	|         | -p download-only-564308        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:55:43
	Running on machine: ubuntu-20-agent-14
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:55:43.317255  425871 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:55:43.317540  425871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:43.317552  425871 out.go:358] Setting ErrFile to fd 2...
	I0829 18:55:43.317559  425871 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:55:43.317779  425871 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 18:55:43.318400  425871 out.go:352] Setting JSON to true
	I0829 18:55:43.319360  425871 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":77889,"bootTime":1724879854,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:55:43.319425  425871 start.go:139] virtualization: kvm guest
	I0829 18:55:43.321677  425871 out.go:97] [download-only-564308] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:55:43.321835  425871 notify.go:220] Checking for updates...
	I0829 18:55:43.323262  425871 out.go:169] MINIKUBE_LOCATION=19530
	I0829 18:55:43.324692  425871 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:55:43.326093  425871 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 18:55:43.327353  425871 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	I0829 18:55:43.328676  425871 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:55:43.331434  425871 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:55:43.331674  425871 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:55:43.352230  425871 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:55:43.352307  425871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:43.398842  425871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:55:43.390148175 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:43.398955  425871 docker.go:307] overlay module found
	I0829 18:55:43.400621  425871 out.go:97] Using the docker driver based on user configuration
	I0829 18:55:43.400642  425871 start.go:297] selected driver: docker
	I0829 18:55:43.400647  425871 start.go:901] validating driver "docker" against <nil>
	I0829 18:55:43.400719  425871 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:55:43.442779  425871 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:55:43.434287807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:55:43.442977  425871 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:55:43.443475  425871 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:55:43.443635  425871 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:55:43.445686  425871 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-564308 host does not exist
	  To start a cluster, run: "minikube start -p download-only-564308"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.19s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-564308
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.96s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-211674 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-211674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-211674
--- PASS: TestDownloadOnlyKic (0.96s)

TestBinaryMirror (0.72s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-967186 --alsologtostderr --binary-mirror http://127.0.0.1:42019 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-967186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-967186
--- PASS: TestBinaryMirror (0.72s)

TestOffline (72.23s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-447038 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-447038 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m8.438606002s)
helpers_test.go:175: Cleaning up "offline-docker-447038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-447038
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-447038: (3.792006027s)
--- PASS: TestOffline (72.23s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-505336
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-505336: exit status 85 (48.313218ms)

-- stdout --
	* Profile "addons-505336" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505336"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-505336
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-505336: exit status 85 (47.136561ms)

-- stdout --
	* Profile "addons-505336" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-505336"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (211.25s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-505336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-505336 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.245276021s)
--- PASS: TestAddons/Setup (211.25s)
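Note: the Setup run above passes more than a dozen repeated --addons flags to a single start invocation. A small helper can assemble that flag list from plain addon names; addons_flags is an assumed name for illustration, not part of the minikube test suite:

```shell
# Hypothetical helper: turns addon names into the repeated --addons=<name>
# flags seen in the start command above. Not part of minikube itself.
addons_flags() {
  local out="" a
  for a in "$@"; do
    out="$out --addons=$a"
  done
  printf '%s' "${out# }"   # drop the leading space
}

# Usage sketch (illustrative only; does not start a cluster here):
# out/minikube-linux-amd64 start -p addons-505336 --wait=true --memory=4000 \
#   --driver=docker --container-runtime=docker $(addons_flags registry ingress ingress-dns)
```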

TestAddons/serial/Volcano (37.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 10.205363ms
addons_test.go:905: volcano-admission stabilized in 10.235721ms
addons_test.go:913: volcano-controller stabilized in 10.258666ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-29w78" [574e90d2-4a00-4bdb-9886-07ab12c6abc3] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0038955s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-dmnlr" [a811aa28-40e6-4153-b40b-ced1f9025eec] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003826857s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-c92pg" [0b8aa3a1-efac-4a17-a67c-0344f5b17ae4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003723109s
addons_test.go:932: (dbg) Run:  kubectl --context addons-505336 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-505336 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-505336 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [df92866e-95d6-4b40-b693-6d066cecfd09] Pending
helpers_test.go:344: "test-job-nginx-0" [df92866e-95d6-4b40-b693-6d066cecfd09] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [df92866e-95d6-4b40-b693-6d066cecfd09] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.003876937s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable volcano --alsologtostderr -v=1: (10.169286129s)
--- PASS: TestAddons/serial/Volcano (37.47s)
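Note: the "healthy within" checks above poll pods by label until they are Running. A manual equivalent can lean on kubectl wait; wait_for_label is an assumed helper name, and the context name and 3m0s timeout are taken from the log:

```shell
# Sketch of a label-based readiness wait mirroring the checks above.
# wait_for_label is an assumed name; kubectl wait does the actual polling.
wait_for_label() {
  local ns=$1 selector=$2 timeout=${3:-180s}
  kubectl --context addons-505336 -n "$ns" wait pod \
    -l "$selector" --for=condition=Ready --timeout="$timeout"
}

# e.g. wait_for_label my-volcano volcano.sh/job-name=test-job
# e.g. wait_for_label volcano-system app=volcano-scheduler
```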

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-505336 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-505336 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/parallel/Ingress (18.87s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-505336 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-505336 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-505336 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2699debb-44f0-4914-9d25-338461bbe78d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2699debb-44f0-4914-9d25-338461bbe78d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003599623s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-505336 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable ingress-dns --alsologtostderr -v=1: (1.031342826s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable ingress --alsologtostderr -v=1: (7.554125673s)
--- PASS: TestAddons/parallel/Ingress (18.87s)
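Note: the ingress-dns portion of this test fetches the cluster IP and then resolves the test hostname against it (the `ip` and `nslookup` steps above). A combined sketch; check_ingress_dns is an assumed name, and MINIKUBE defaults to the binary path used throughout this report:

```shell
# Sketch of the ingress-dns verification above: fetch the cluster IP, then
# resolve the test hostname against it. check_ingress_dns is an assumed name.
check_ingress_dns() {
  local profile=$1 host=$2
  local mk=${MINIKUBE:-out/minikube-linux-amd64}
  local ip
  ip=$("$mk" -p "$profile" ip) || return 1
  nslookup "$host" "$ip"
}

# e.g. check_ingress_dns addons-505336 hello-john.test
```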

TestAddons/parallel/InspektorGadget (11.71s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tgb79" [b69030e3-0b08-479d-86f8-84820b580bb1] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003786748s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-505336
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-505336: (5.70044681s)
--- PASS: TestAddons/parallel/InspektorGadget (11.71s)

TestAddons/parallel/MetricsServer (5.63s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.422429ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-brsc8" [edb7fd7c-feae-493c-8e00-a0629ed0235b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003287368s
addons_test.go:417: (dbg) Run:  kubectl --context addons-505336 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

TestAddons/parallel/HelmTiller (8.77s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.326668ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-7x5zs" [32e00d4f-af90-4a61-9f18-048d53bb045d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003262804s
addons_test.go:475: (dbg) Run:  kubectl --context addons-505336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-505336 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.351438172s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (8.77s)

TestAddons/parallel/CSI (46.4s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.299411ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-505336 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-505336 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eca2e228-ceb1-453b-8170-f36d3d2c5978] Pending
helpers_test.go:344: "task-pv-pod" [eca2e228-ceb1-453b-8170-f36d3d2c5978] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003736969s
addons_test.go:590: (dbg) Run:  kubectl --context addons-505336 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-505336 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-505336 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-505336 delete pod task-pv-pod: (1.008771579s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-505336 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-505336 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-505336 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [070e52ea-5ace-4e56-8d57-906beef9bb7b] Pending
helpers_test.go:344: "task-pv-pod-restore" [070e52ea-5ace-4e56-8d57-906beef9bb7b] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004269164s
addons_test.go:632: (dbg) Run:  kubectl --context addons-505336 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-505336 delete pod task-pv-pod-restore: (1.029538038s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-505336 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-505336 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.483142055s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (46.40s)
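Note: the repeated "get pvc ... jsonpath={.status.phase}" lines above are the helper polling a PVC until it reaches the expected phase. A sketch of that loop; wait_for_phase is an assumed name, the retry count and 2s interval are guesses, and the kubectl invocation mirrors the logged commands:

```shell
# Sketch of the PVC phase polling behind the repeated jsonpath queries above.
# wait_for_phase is an assumed name, not part of the minikube test suite.
wait_for_phase() {
  local pvc=$1 want=$2 tries=${3:-180}
  local phase
  for _ in $(seq "$tries"); do
    phase=$(kubectl --context addons-505336 get pvc "$pvc" \
      -o 'jsonpath={.status.phase}' -n default)
    [ "$phase" = "$want" ] && return 0
    sleep 2
  done
  return 1
}

# e.g. wait_for_phase hpvc Bound && wait_for_phase hpvc-restore Bound
```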

TestAddons/parallel/Headlamp (17.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-505336 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-tcbfh" [ccdc8d4b-efc8-4f28-b01d-4fdd0e57cbce] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-tcbfh" [ccdc8d4b-efc8-4f28-b01d-4fdd0e57cbce] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003471106s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable headlamp --alsologtostderr -v=1: (5.674177927s)
--- PASS: TestAddons/parallel/Headlamp (17.34s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.39s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zpp9h" [7121288e-1763-4304-a412-6f0f020fe6da] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003395s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-505336
--- PASS: TestAddons/parallel/CloudSpanner (5.39s)

                                                
                                    
TestAddons/parallel/LocalPath (8.99s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-505336 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-505336 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7d705732-267e-48a0-84e0-7c486a0ba536] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7d705732-267e-48a0-84e0-7c486a0ba536] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7d705732-267e-48a0-84e0-7c486a0ba536] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003328022s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-505336 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 ssh "cat /opt/local-path-provisioner/pvc-e7cb1702-6246-4f1e-af32-73da81c1bbe3_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-505336 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-505336 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.99s)
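The LocalPath test above waits for the PVC by re-running `kubectl get pvc test-pvc -o jsonpath={.status.phase}` until it reports `Bound`. A minimal sketch of that wait loop, using a hypothetical `get_phase` stub in place of the real kubectl call (which needs a live cluster):

```shell
#!/bin/sh
# get_phase is a hypothetical stand-in for:
#   kubectl --context addons-505336 get pvc test-pvc \
#     -o jsonpath={.status.phase} -n default
# Here it simulates a PVC that binds on the third poll.
get_phase() {
  if [ "$1" -ge 3 ]; then echo "Bound"; else echo "Pending"; fi
}

tries=0
phase=Pending
# Re-check the phase until it is Bound or we give up, mirroring the
# repeated jsonpath queries in the log above.
while [ "$phase" != "Bound" ] && [ "$tries" -lt 10 ]; do
  tries=$((tries + 1))
  phase=$(get_phase "$tries")
done
echo "phase=$phase after $tries polls"   # prints: phase=Bound after 3 polls
```

A real wait would also sleep between polls and enforce a wall-clock deadline, as the 5m0s bound in the log implies.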

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.39s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vjlkn" [56fd832b-ce49-4077-95a5-fc21a8abdc0d] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004215973s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-505336
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.39s)

                                                
                                    
TestAddons/parallel/Yakd (11.69s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-cx2xz" [69e61b74-bfca-4f26-871e-4fcaca06d094] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004388393s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-505336 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-505336 addons disable yakd --alsologtostderr -v=1: (5.684006413s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.03s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-505336
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-505336: (10.795720987s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-505336
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-505336
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-505336
--- PASS: TestAddons/StoppedEnableDisable (11.03s)

                                                
                                    
TestCertOptions (29.74s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-718653 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-718653 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.928798943s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-718653 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-718653 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-718653 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-718653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-718653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-718653: (3.11952128s)
--- PASS: TestCertOptions (29.74s)
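TestCertOptions above checks that the extra `--apiserver-ips`/`--apiserver-names` values end up in the apiserver certificate's SAN list, inspecting it with `openssl x509 -text`. The same verification can be sketched against a throwaway self-signed cert (the key/cert paths here are temporary, not minikube's; `-addext` needs OpenSSL 1.1.1+):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
# Create a self-signed cert carrying the SAN entries the test passes in.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com" \
  2>/dev/null
# Inspect it the way the test inspects /var/lib/minikube/certs/apiserver.crt.
san=$(openssl x509 -text -noout -in "$tmp/cert.pem")
echo "$san" | grep -q "192.168.15.15"
echo "$san" | grep -q "www.google.com"
echo "SAN entries present"
```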

                                                
                                    
TestCertExpiration (245.93s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-566314 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-566314 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (30.019058755s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-566314 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-566314 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (31.939285598s)
helpers_test.go:175: Cleaning up "cert-expiration-566314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-566314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-566314: (3.967956656s)
--- PASS: TestCertExpiration (245.93s)

                                                
                                    
TestDockerFlags (27.24s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-400832 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-400832 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (23.685875897s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-400832 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-400832 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-400832" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-400832
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-400832: (2.885398587s)
--- PASS: TestDockerFlags (27.24s)

                                                
                                    
TestForceSystemdFlag (37.94s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-491982 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-491982 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.624486038s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-491982 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-491982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-491982
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-491982: (2.019225894s)
--- PASS: TestForceSystemdFlag (37.94s)

                                                
                                    
TestForceSystemdEnv (25.13s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-010688 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0829 19:39:20.688011  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-010688 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.765850925s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-010688 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-010688" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-010688
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-010688: (2.043273068s)
--- PASS: TestForceSystemdEnv (25.13s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.26s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.26s)

                                                
                                    
TestErrorSpam/setup (23.65s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-875837 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-875837 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-875837 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-875837 --driver=docker  --container-runtime=docker: (23.650851648s)
--- PASS: TestErrorSpam/setup (23.65s)

                                                
                                    
TestErrorSpam/start (0.53s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

                                                
                                    
TestErrorSpam/status (0.8s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
TestErrorSpam/pause (1.09s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 pause
--- PASS: TestErrorSpam/pause (1.09s)

                                                
                                    
TestErrorSpam/unpause (1.27s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 unpause
--- PASS: TestErrorSpam/unpause (1.27s)

                                                
                                    
TestErrorSpam/stop (1.87s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 stop: (1.70717803s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-875837 --log_dir /tmp/nospam-875837 stop
--- PASS: TestErrorSpam/stop (1.87s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19530-418716/.minikube/files/etc/test/nested/copy/425496/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (65.46s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-036671 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m5.45771772s)
--- PASS: TestFunctional/serial/StartWithProxy (65.46s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.23s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-036671 --alsologtostderr -v=8: (34.233047748s)
functional_test.go:663: soft start took 34.233970788s for "functional-036671" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.23s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-036671 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-036671 /tmp/TestFunctionalserialCacheCmdcacheadd_local148495315/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache add minikube-local-cache-test:functional-036671
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache delete minikube-local-cache-test:functional-036671
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-036671
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (247.507315ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present
-- /stdout --
** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)
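The cache_reload steps above hinge on `crictl inspecti` exiting non-zero once the image has been removed, so that `minikube cache reload` can restore it and a second `inspecti` succeeds. A minimal sketch of that check-then-reload pattern, with hypothetical `inspect_image`/`reload_cache` stubs standing in for the real minikube and crictl invocations:

```shell
#!/bin/sh
# Simulate the state after `docker rmi registry.k8s.io/pause:latest`.
IMAGE_PRESENT=0

inspect_image() {
  # Stand-in for: minikube ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # (exits 1 when the image is absent, as in the FATA output above).
  [ "$IMAGE_PRESENT" -eq 1 ]
}

reload_cache() {
  # Stand-in for: minikube cache reload, which re-loads cached images
  # into the node's container runtime.
  IMAGE_PRESENT=1
}

if ! inspect_image; then
  reload_cache
fi
inspect_image && echo "image restored"   # prints: image restored
```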

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 kubectl -- --context functional-036671 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-036671 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.12s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-036671 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.122325393s)
functional_test.go:761: restart took 39.123111187s for "functional-036671" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.12s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-036671 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.93s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 logs
--- PASS: TestFunctional/serial/LogsCmd (0.93s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.94s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 logs --file /tmp/TestFunctionalserialLogsFileCmd1270350446/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.94s)

TestFunctional/serial/InvalidService (3.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-036671 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-036671
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-036671: exit status 115 (296.915113ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31370 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-036671 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.99s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 config get cpus: exit status 14 (54.762769ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 config get cpus: exit status 14 (54.314057ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
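The behavior this test pins down is an exit-code contract: `config get` on an unset key exits 14, so scripts can branch on `$?`. A sketch with a hypothetical stub standing in for the minikube binary (the stub is not minikube's implementation, only an emulation of the exit code seen in the log above):

```shell
# Hypothetical stub emulating `minikube config get cpus`: prints the value if
# set, otherwise fails with status 14 like the real binary does above.
config_get_stub() {
  if [ -n "${CPUS:-}" ]; then
    echo "$CPUS"
  else
    echo "Error: specified key could not be found in config" >&2
    return 14
  fi
}

if cpus=$(config_get_stub 2>/dev/null); then
  echo "cpus=$cpus"
else
  echo "cpus is unset (exit $?)"   # $? here is the stub's exit status: 14
fi
```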

TestFunctional/parallel/DashboardCmd (9.59s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-036671 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-036671 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 481280: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.59s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-036671 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (180.367669ms)

-- stdout --
	* [functional-036671] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 19:12:53.491873  479993 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:12:53.491998  479993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:12:53.492011  479993 out.go:358] Setting ErrFile to fd 2...
	I0829 19:12:53.492018  479993 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:12:53.492229  479993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:12:53.492889  479993 out.go:352] Setting JSON to false
	I0829 19:12:53.494560  479993 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":78919,"bootTime":1724879854,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:12:53.494638  479993 start.go:139] virtualization: kvm guest
	I0829 19:12:53.496987  479993 out.go:177] * [functional-036671] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 19:12:53.498149  479993 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:12:53.498169  479993 notify.go:220] Checking for updates...
	I0829 19:12:53.500830  479993 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:12:53.502097  479993 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 19:12:53.503340  479993 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	I0829 19:12:53.504360  479993 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:12:53.505351  479993 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:12:53.507013  479993 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:12:53.507788  479993 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:12:53.536275  479993 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 19:12:53.536404  479993 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:12:53.612425  479993 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-08-29 19:12:53.596734855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 19:12:53.612582  479993 docker.go:307] overlay module found
	I0829 19:12:53.614672  479993 out.go:177] * Using the docker driver based on existing profile
	I0829 19:12:53.616051  479993 start.go:297] selected driver: docker
	I0829 19:12:53.616075  479993 start.go:901] validating driver "docker" against &{Name:functional-036671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-036671 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:12:53.616200  479993 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:12:53.619652  479993 out.go:201] 
	W0829 19:12:53.620769  479993 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 19:12:53.621901  479993 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.41s)
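The failure path exercised here is a pre-flight bound: a requested memory allocation below minikube's 1800MB usable minimum aborts the start (exit status 23, `RSRC_INSUFFICIENT_REQ_MEMORY`) before any resources are touched. A minimal sketch of that comparison; the threshold and exit code are taken from the log above, the logic is illustrative:

```shell
# Requested vs. minimum memory, per the RSRC_INSUFFICIENT_REQ_MEMORY message above.
req_mb=250
min_mb=1800

if [ "$req_mb" -lt "$min_mb" ]; then
  echo "requested ${req_mb}MiB is less than the usable minimum of ${min_mb}MB"
  exit_code=23   # the status the dry-run exited with above
fi
echo "would exit with status ${exit_code:-0}"
```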

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-036671 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-036671 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (161.385117ms)

-- stdout --
	* [functional-036671] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 19:12:36.968320  476852 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:12:36.968454  476852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:12:36.968465  476852 out.go:358] Setting ErrFile to fd 2...
	I0829 19:12:36.968473  476852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:12:36.968819  476852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:12:36.969463  476852 out.go:352] Setting JSON to false
	I0829 19:12:36.970685  476852 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-14","uptime":78903,"bootTime":1724879854,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 19:12:36.970748  476852 start.go:139] virtualization: kvm guest
	I0829 19:12:36.973693  476852 out.go:177] * [functional-036671] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 19:12:36.975118  476852 out.go:177]   - MINIKUBE_LOCATION=19530
	I0829 19:12:36.975155  476852 notify.go:220] Checking for updates...
	I0829 19:12:36.977785  476852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 19:12:36.979453  476852 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	I0829 19:12:36.980775  476852 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	I0829 19:12:36.982221  476852 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 19:12:36.983661  476852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 19:12:36.985544  476852 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:12:36.986153  476852 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 19:12:37.011820  476852 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 19:12:37.013082  476852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:12:37.067413  476852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 19:12:37.057326877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 19:12:37.067518  476852 docker.go:307] overlay module found
	I0829 19:12:37.070019  476852 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0829 19:12:37.071500  476852 start.go:297] selected driver: docker
	I0829 19:12:37.071518  476852 start.go:901] validating driver "docker" against &{Name:functional-036671 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724862063-19530@sha256:fd0f41868bf20a720502cce04c5201bfb064f3c267161af6fd5265d69c85c9f0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-036671 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 19:12:37.071630  476852 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 19:12:37.074223  476852 out.go:201] 
	W0829 19:12:37.075703  476852 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 19:12:37.077049  476852 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (0.88s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.88s)

TestFunctional/parallel/ServiceCmdConnect (18.49s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-036671 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-036671 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kl7zm" [73c0212e-916a-4de2-93f6-6500a205c0f7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-kl7zm" [73c0212e-916a-4de2-93f6-6500a205c0f7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 18.004276988s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31402
functional_test.go:1675: http://192.168.49.2:31402: success! body:

Hostname: hello-node-connect-67bdd5bbb4-kl7zm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31402
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (18.49s)
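The flow above: expose a deployment on a NodePort, ask `service ... --url` for the endpoint, then fetch it. A sketch of consuming that URL in a script; the live probe is left commented out since it needs the cluster's NodePort to be reachable:

```shell
# URL as printed by `minikube service hello-node-connect --url` in the log above.
url="http://192.168.49.2:31402"
hostport=${url#http://}      # strip the scheme for logging or firewall checks
echo "probing $hostport"
# wget --spider -S "$url"    # live probe; requires the NodePort to be reachable
```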

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (40.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cdb54595-4fa1-4374-907c-df7c02646f84] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003958773s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-036671 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-036671 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-036671 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-036671 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [dd0b8547-62d2-481f-bb50-fe71b9a4a0d1] Pending
helpers_test.go:344: "sp-pod" [dd0b8547-62d2-481f-bb50-fe71b9a4a0d1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [dd0b8547-62d2-481f-bb50-fe71b9a4a0d1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.02193911s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-036671 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-036671 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-036671 delete -f testdata/storage-provisioner/pod.yaml: (1.769604465s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-036671 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [88e660af-5b84-4d7b-b476-7ba707507ecf] Pending
helpers_test.go:344: "sp-pod" [88e660af-5b84-4d7b-b476-7ba707507ecf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [88e660af-5b84-4d7b-b476-7ba707507ecf] Running
2024/08/29 19:13:03 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003107448s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-036671 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.64s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh -n functional-036671 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cp functional-036671:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd168180560/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh -n functional-036671 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh -n functional-036671 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.44s)

TestFunctional/parallel/MySQL (25.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-036671 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-8qs6n" [a5a87e66-a21b-46ab-9e6e-1dac662f2c56] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-8qs6n" [a5a87e66-a21b-46ab-9e6e-1dac662f2c56] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.023717856s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;": exit status 1 (117.900495ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;": exit status 1 (101.459209ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;": exit status 1 (119.185625ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-036671 exec mysql-6cdb49bbb-8qs6n -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.20s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/425496/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /etc/test/nested/copy/425496/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (1.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/425496.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /etc/ssl/certs/425496.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/425496.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /usr/share/ca-certificates/425496.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/4254962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /etc/ssl/certs/4254962.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/4254962.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /usr/share/ca-certificates/4254962.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.64s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-036671 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh "sudo systemctl is-active crio": exit status 1 (271.382362ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/Version/components (1.02s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-amd64 -p functional-036671 version -o=json --components: (1.016743698s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

TestFunctional/parallel/DockerEnv/bash (0.98s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-036671 docker-env) && out/minikube-linux-amd64 status -p functional-036671"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-036671 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.98s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "313.890465ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "65.643806ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036671 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-036671
docker.io/kicbase/echo-server:functional-036671
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036671 image ls --format short --alsologtostderr:
I0829 19:12:56.531559  481949 out.go:345] Setting OutFile to fd 1 ...
I0829 19:12:56.531686  481949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:56.531699  481949 out.go:358] Setting ErrFile to fd 2...
I0829 19:12:56.531705  481949 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:56.531864  481949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
I0829 19:12:56.532432  481949 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:56.532529  481949 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:56.533058  481949 cli_runner.go:164] Run: docker container inspect functional-036671 --format={{.State.Status}}
I0829 19:12:56.550485  481949 ssh_runner.go:195] Run: systemctl --version
I0829 19:12:56.550550  481949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036671
I0829 19:12:56.567410  481949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/functional-036671/id_rsa Username:docker}
I0829 19:12:56.651999  481949 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036671 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| localhost/my-image                          | functional-036671 | a438bd357f2f6 | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| docker.io/kicbase/echo-server               | functional-036671 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-036671 | a9885894bdaeb | 30B    |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036671 image ls --format table --alsologtostderr:
I0829 19:13:00.611625  482597 out.go:345] Setting OutFile to fd 1 ...
I0829 19:13:00.611803  482597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:13:00.611816  482597 out.go:358] Setting ErrFile to fd 2...
I0829 19:13:00.611822  482597 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:13:00.612006  482597 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
I0829 19:13:00.612959  482597 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:13:00.613132  482597 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:13:00.613840  482597 cli_runner.go:164] Run: docker container inspect functional-036671 --format={{.State.Status}}
I0829 19:13:00.635562  482597 ssh_runner.go:195] Run: systemctl --version
I0829 19:13:00.635628  482597 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036671
I0829 19:13:00.653881  482597 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/functional-036671/id_rsa Username:docker}
I0829 19:13:00.755172  482597 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036671 image ls --format json --alsologtostderr:
[{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-036671"],"size":"4940000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"a9885894bdaeb3a25b67cb345f796e7ea3d45c348a3433dc5ed3b59460bf59b8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-036671"],"size":"30"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"a438bd357f2f60ce1abd388e05582717e8051e6ca7b96728216bf5e5c46ef9eb","repoDigests":[],"repoTags":["localhost/my-image:functional-036671"],"size":"1240000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036671 image ls --format json --alsologtostderr:
I0829 19:13:00.386724  482550 out.go:345] Setting OutFile to fd 1 ...
I0829 19:13:00.387030  482550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:13:00.387043  482550 out.go:358] Setting ErrFile to fd 2...
I0829 19:13:00.387049  482550 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:13:00.387386  482550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
I0829 19:13:00.388091  482550 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:13:00.388200  482550 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:13:00.388622  482550 cli_runner.go:164] Run: docker container inspect functional-036671 --format={{.State.Status}}
I0829 19:13:00.407999  482550 ssh_runner.go:195] Run: systemctl --version
I0829 19:13:00.408073  482550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036671
I0829 19:13:00.430226  482550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/functional-036671/id_rsa Username:docker}
I0829 19:13:00.531845  482550 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-036671 image ls --format yaml --alsologtostderr:
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-036671
size: "4940000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a9885894bdaeb3a25b67cb345f796e7ea3d45c348a3433dc5ed3b59460bf59b8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-036671
size: "30"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036671 image ls --format yaml --alsologtostderr:
I0829 19:12:56.752425  482015 out.go:345] Setting OutFile to fd 1 ...
I0829 19:12:56.753238  482015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:56.753262  482015 out.go:358] Setting ErrFile to fd 2...
I0829 19:12:56.753268  482015 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:56.753542  482015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
I0829 19:12:56.754188  482015 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:56.754319  482015 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:56.754767  482015 cli_runner.go:164] Run: docker container inspect functional-036671 --format={{.State.Status}}
I0829 19:12:56.771374  482015 ssh_runner.go:195] Run: systemctl --version
I0829 19:12:56.771418  482015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036671
I0829 19:12:56.791630  482015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/functional-036671/id_rsa Username:docker}
I0829 19:12:56.927993  482015 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh pgrep buildkitd: exit status 1 (248.684437ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image build -t localhost/my-image:functional-036671 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-036671 image build -t localhost/my-image:functional-036671 testdata/build --alsologtostderr: (2.877212002s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-036671 image build -t localhost/my-image:functional-036671 testdata/build --alsologtostderr:
I0829 19:12:57.294130  482154 out.go:345] Setting OutFile to fd 1 ...
I0829 19:12:57.294380  482154 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:57.294388  482154 out.go:358] Setting ErrFile to fd 2...
I0829 19:12:57.294393  482154 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 19:12:57.294584  482154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
I0829 19:12:57.295432  482154 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:57.296066  482154 config.go:182] Loaded profile config "functional-036671": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 19:12:57.296567  482154 cli_runner.go:164] Run: docker container inspect functional-036671 --format={{.State.Status}}
I0829 19:12:57.315639  482154 ssh_runner.go:195] Run: systemctl --version
I0829 19:12:57.315709  482154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-036671
I0829 19:12:57.334123  482154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/functional-036671/id_rsa Username:docker}
I0829 19:12:57.447710  482154 build_images.go:161] Building image from path: /tmp/build.1117452516.tar
I0829 19:12:57.447790  482154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 19:12:57.476119  482154 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1117452516.tar
I0829 19:12:57.480649  482154 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1117452516.tar: stat -c "%s %y" /var/lib/minikube/build/build.1117452516.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1117452516.tar': No such file or directory
I0829 19:12:57.480682  482154 ssh_runner.go:362] scp /tmp/build.1117452516.tar --> /var/lib/minikube/build/build.1117452516.tar (3072 bytes)
I0829 19:12:57.503182  482154 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1117452516
I0829 19:12:57.511011  482154 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1117452516 -xf /var/lib/minikube/build/build.1117452516.tar
I0829 19:12:57.520092  482154 docker.go:360] Building image: /var/lib/minikube/build/build.1117452516
I0829 19:12:57.520181  482154 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-036671 /var/lib/minikube/build/build.1117452516
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:a438bd357f2f60ce1abd388e05582717e8051e6ca7b96728216bf5e5c46ef9eb done
#8 naming to localhost/my-image:functional-036671 done
#8 DONE 0.0s
I0829 19:13:00.093575  482154 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-036671 /var/lib/minikube/build/build.1117452516: (2.57335338s)
I0829 19:13:00.093654  482154 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1117452516
I0829 19:13:00.104198  482154 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1117452516.tar
I0829 19:13:00.113078  482154 build_images.go:217] Built localhost/my-image:functional-036671 from /tmp/build.1117452516.tar
I0829 19:13:00.113114  482154 build_images.go:133] succeeded building to: functional-036671
I0829 19:13:00.113119  482154 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.35s)

TestFunctional/parallel/ImageCommands/Setup (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-036671
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "322.583619ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "63.020475ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image load --daemon kicbase/echo-server:functional-036671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 474638: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.46s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-036671 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3c79c581-dc5d-4c91-a381-dc89a7c3bb93] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3c79c581-dc5d-4c91-a381-dc89a7c3bb93] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003738243s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image load --daemon kicbase/echo-server:functional-036671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-036671
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image load --daemon kicbase/echo-server:functional-036671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.98s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image save kicbase/echo-server:functional-036671 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image rm kicbase/echo-server:functional-036671 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-036671
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 image save --daemon kicbase/echo-server:functional-036671 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-036671
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-036671 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.144.177 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-036671 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (14.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdany-port2181546114/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724958757082890099" to /tmp/TestFunctionalparallelMountCmdany-port2181546114/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724958757082890099" to /tmp/TestFunctionalparallelMountCmdany-port2181546114/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724958757082890099" to /tmp/TestFunctionalparallelMountCmdany-port2181546114/001/test-1724958757082890099
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.340013ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 19:12 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 19:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 19:12 test-1724958757082890099
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh cat /mount-9p/test-1724958757082890099
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-036671 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e1032383-bd01-474e-8db1-cf2c1aa39243] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e1032383-bd01-474e-8db1-cf2c1aa39243] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e1032383-bd01-474e-8db1-cf2c1aa39243] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 12.0036579s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-036671 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdany-port2181546114/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.52s)

TestFunctional/parallel/MountCmd/specific-port (1.89s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdspecific-port986069298/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.885983ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdspecific-port986069298/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh "sudo umount -f /mount-9p": exit status 1 (243.655829ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-036671 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdspecific-port986069298/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T" /mount1: exit status 1 (328.543146ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-036671 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-036671 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1343125735/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.56s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-036671 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-036671 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-r6r8z" [00d7d291-b38c-4600-9ae6-2fe9065425ef] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-r6r8z" [00d7d291-b38c-4600-9ae6-2fe9065425ef] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003411235s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ServiceCmd/List (1.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service list
functional_test.go:1459: (dbg) Done: out/minikube-linux-amd64 -p functional-036671 service list: (1.655233178s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-036671 service list -o json: (1.660845816s)
functional_test.go:1494: Took "1.660951492s" to run "out/minikube-linux-amd64 -p functional-036671 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.66s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31196
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-036671 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31196
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-036671
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-036671
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-036671
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (106.55s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-590117 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 19:14:20.688157  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:20.695326  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:20.706724  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:20.728175  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:20.769589  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:20.851020  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:21.012570  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:21.334298  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:21.976030  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:23.258325  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:25.821599  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:30.943156  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:14:41.185099  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-590117 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m45.901136293s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (106.55s)

TestMultiControlPlane/serial/DeployApp (5.88s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- rollout status deployment/busybox
E0829 19:15:01.667179  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-590117 -- rollout status deployment/busybox: (4.010253671s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-28p25 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-hswq5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-kvjbt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-28p25 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-hswq5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-kvjbt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-28p25 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-hswq5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-kvjbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.88s)

TestMultiControlPlane/serial/PingHostFromPods (1.06s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-28p25 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-28p25 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-hswq5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-hswq5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-kvjbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-590117 -- exec busybox-7dff88458-kvjbt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.06s)
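The host-IP extraction above shells out to `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`. The same field selection can be sketched in Go, assuming illustrative sample output (real busybox nslookup output varies by version, which is why the pipeline pins line 5 and field 3):

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics the shell pipeline: take nslookup's raw
// output, select line 5 (awk 'NR==5'), split that line on single
// spaces (cut -d' '), and return field 3 -- the resolved address.
// It returns "" when the output is too short to contain line 5.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5 -> index 4
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // cut -f3
}

func main() {
	// Hypothetical busybox-style nslookup output for illustration.
	sample := "Server:\t10.96.0.10\nAddress:\t10.96.0.10:53\n\n" +
		"Name:\thost.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // → 192.168.49.1
}
```

Note that `cut -d' '` splits on every single space (runs of spaces produce empty fields), so `strings.Split` on `" "` matches its semantics, unlike `strings.Fields`.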

TestMultiControlPlane/serial/AddWorkerNode (20.7s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-590117 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-590117 -v=7 --alsologtostderr: (19.921073708s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.70s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-590117 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.6s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.60s)

TestMultiControlPlane/serial/CopyFile (14.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp testdata/cp-test.txt ha-590117:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3468123249/001/cp-test_ha-590117.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117:/home/docker/cp-test.txt ha-590117-m02:/home/docker/cp-test_ha-590117_ha-590117-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test_ha-590117_ha-590117-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117:/home/docker/cp-test.txt ha-590117-m03:/home/docker/cp-test_ha-590117_ha-590117-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test_ha-590117_ha-590117-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117:/home/docker/cp-test.txt ha-590117-m04:/home/docker/cp-test_ha-590117_ha-590117-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test_ha-590117_ha-590117-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp testdata/cp-test.txt ha-590117-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3468123249/001/cp-test_ha-590117-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m02:/home/docker/cp-test.txt ha-590117:/home/docker/cp-test_ha-590117-m02_ha-590117.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test_ha-590117-m02_ha-590117.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m02:/home/docker/cp-test.txt ha-590117-m03:/home/docker/cp-test_ha-590117-m02_ha-590117-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test_ha-590117-m02_ha-590117-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m02:/home/docker/cp-test.txt ha-590117-m04:/home/docker/cp-test_ha-590117-m02_ha-590117-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test_ha-590117-m02_ha-590117-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp testdata/cp-test.txt ha-590117-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3468123249/001/cp-test_ha-590117-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m03:/home/docker/cp-test.txt ha-590117:/home/docker/cp-test_ha-590117-m03_ha-590117.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test_ha-590117-m03_ha-590117.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m03:/home/docker/cp-test.txt ha-590117-m02:/home/docker/cp-test_ha-590117-m03_ha-590117-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test_ha-590117-m03_ha-590117-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m03:/home/docker/cp-test.txt ha-590117-m04:/home/docker/cp-test_ha-590117-m03_ha-590117-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test_ha-590117-m03_ha-590117-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp testdata/cp-test.txt ha-590117-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3468123249/001/cp-test_ha-590117-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m04:/home/docker/cp-test.txt ha-590117:/home/docker/cp-test_ha-590117-m04_ha-590117.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117 "sudo cat /home/docker/cp-test_ha-590117-m04_ha-590117.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m04:/home/docker/cp-test.txt ha-590117-m02:/home/docker/cp-test_ha-590117-m04_ha-590117-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m02 "sudo cat /home/docker/cp-test_ha-590117-m04_ha-590117-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 cp ha-590117-m04:/home/docker/cp-test.txt ha-590117-m03:/home/docker/cp-test_ha-590117-m04_ha-590117-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 ssh -n ha-590117-m03 "sudo cat /home/docker/cp-test_ha-590117-m04_ha-590117-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.94s)

TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 node stop m02 -v=7 --alsologtostderr
E0829 19:15:42.628682  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-590117 node stop m02 -v=7 --alsologtostderr: (10.717525797s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr: exit status 7 (628.121566ms)

-- stdout --
	ha-590117
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-590117-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-590117-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-590117-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0829 19:15:52.442214  510243 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:15:52.442507  510243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:15:52.442526  510243 out.go:358] Setting ErrFile to fd 2...
	I0829 19:15:52.442533  510243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:15:52.442727  510243 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:15:52.442997  510243 out.go:352] Setting JSON to false
	I0829 19:15:52.443036  510243 mustload.go:65] Loading cluster: ha-590117
	I0829 19:15:52.443168  510243 notify.go:220] Checking for updates...
	I0829 19:15:52.443467  510243 config.go:182] Loaded profile config "ha-590117": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:15:52.443493  510243 status.go:255] checking status of ha-590117 ...
	I0829 19:15:52.444042  510243 cli_runner.go:164] Run: docker container inspect ha-590117 --format={{.State.Status}}
	I0829 19:15:52.460531  510243 status.go:330] ha-590117 host status = "Running" (err=<nil>)
	I0829 19:15:52.460564  510243 host.go:66] Checking if "ha-590117" exists ...
	I0829 19:15:52.460819  510243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-590117
	I0829 19:15:52.480587  510243 host.go:66] Checking if "ha-590117" exists ...
	I0829 19:15:52.480848  510243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:15:52.480907  510243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-590117
	I0829 19:15:52.497607  510243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/ha-590117/id_rsa Username:docker}
	I0829 19:15:52.584287  510243 ssh_runner.go:195] Run: systemctl --version
	I0829 19:15:52.588853  510243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:15:52.599956  510243 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:15:52.646118  510243 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-29 19:15:52.636130286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 19:15:52.646896  510243 kubeconfig.go:125] found "ha-590117" server: "https://192.168.49.254:8443"
	I0829 19:15:52.646935  510243 api_server.go:166] Checking apiserver status ...
	I0829 19:15:52.646986  510243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:15:52.657999  510243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2441/cgroup
	I0829 19:15:52.666893  510243 api_server.go:182] apiserver freezer: "4:freezer:/docker/6b908c52163e2d02e74142c5c24a4debbfee280d1fd45144d313041d52cf16b3/kubepods/burstable/podcd54c64a8c8d20d5535357d0ed6a71b3/00da34f522bd2f27cec5e25e59cece5d0ac9f5a3b875f5c86b947810316f3c5f"
	I0829 19:15:52.666957  510243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6b908c52163e2d02e74142c5c24a4debbfee280d1fd45144d313041d52cf16b3/kubepods/burstable/podcd54c64a8c8d20d5535357d0ed6a71b3/00da34f522bd2f27cec5e25e59cece5d0ac9f5a3b875f5c86b947810316f3c5f/freezer.state
	I0829 19:15:52.674476  510243 api_server.go:204] freezer state: "THAWED"
	I0829 19:15:52.674505  510243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 19:15:52.678218  510243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 19:15:52.678238  510243 status.go:422] ha-590117 apiserver status = Running (err=<nil>)
	I0829 19:15:52.678248  510243 status.go:257] ha-590117 status: &{Name:ha-590117 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:15:52.678270  510243 status.go:255] checking status of ha-590117-m02 ...
	I0829 19:15:52.678499  510243 cli_runner.go:164] Run: docker container inspect ha-590117-m02 --format={{.State.Status}}
	I0829 19:15:52.695358  510243 status.go:330] ha-590117-m02 host status = "Stopped" (err=<nil>)
	I0829 19:15:52.695379  510243 status.go:343] host is not running, skipping remaining checks
	I0829 19:15:52.695386  510243 status.go:257] ha-590117-m02 status: &{Name:ha-590117-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:15:52.695405  510243 status.go:255] checking status of ha-590117-m03 ...
	I0829 19:15:52.695654  510243 cli_runner.go:164] Run: docker container inspect ha-590117-m03 --format={{.State.Status}}
	I0829 19:15:52.712425  510243 status.go:330] ha-590117-m03 host status = "Running" (err=<nil>)
	I0829 19:15:52.712449  510243 host.go:66] Checking if "ha-590117-m03" exists ...
	I0829 19:15:52.712696  510243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-590117-m03
	I0829 19:15:52.730150  510243 host.go:66] Checking if "ha-590117-m03" exists ...
	I0829 19:15:52.730447  510243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:15:52.730494  510243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-590117-m03
	I0829 19:15:52.748647  510243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/ha-590117-m03/id_rsa Username:docker}
	I0829 19:15:52.835539  510243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:15:52.846122  510243 kubeconfig.go:125] found "ha-590117" server: "https://192.168.49.254:8443"
	I0829 19:15:52.846147  510243 api_server.go:166] Checking apiserver status ...
	I0829 19:15:52.846179  510243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:15:52.856224  510243 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2255/cgroup
	I0829 19:15:52.864355  510243 api_server.go:182] apiserver freezer: "4:freezer:/docker/56e63c2b67b1f55f87a04a803b7c56ce867488b04ef6e2f104d0528779a2e2e3/kubepods/burstable/pod2127b6b09ce0fe36d5912fc128015b3b/715a3ee1b43b1494d848d14e76fbff13ade2b00856e4716974701f8d46e20993"
	I0829 19:15:52.864412  510243 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/56e63c2b67b1f55f87a04a803b7c56ce867488b04ef6e2f104d0528779a2e2e3/kubepods/burstable/pod2127b6b09ce0fe36d5912fc128015b3b/715a3ee1b43b1494d848d14e76fbff13ade2b00856e4716974701f8d46e20993/freezer.state
	I0829 19:15:52.871820  510243 api_server.go:204] freezer state: "THAWED"
	I0829 19:15:52.871843  510243 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 19:15:52.876082  510243 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 19:15:52.876108  510243 status.go:422] ha-590117-m03 apiserver status = Running (err=<nil>)
	I0829 19:15:52.876119  510243 status.go:257] ha-590117-m03 status: &{Name:ha-590117-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:15:52.876143  510243 status.go:255] checking status of ha-590117-m04 ...
	I0829 19:15:52.876407  510243 cli_runner.go:164] Run: docker container inspect ha-590117-m04 --format={{.State.Status}}
	I0829 19:15:52.894884  510243 status.go:330] ha-590117-m04 host status = "Running" (err=<nil>)
	I0829 19:15:52.894909  510243 host.go:66] Checking if "ha-590117-m04" exists ...
	I0829 19:15:52.895196  510243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-590117-m04
	I0829 19:15:52.912882  510243 host.go:66] Checking if "ha-590117-m04" exists ...
	I0829 19:15:52.913155  510243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:15:52.913255  510243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-590117-m04
	I0829 19:15:52.930318  510243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/ha-590117-m04/id_rsa Username:docker}
	I0829 19:15:53.015554  510243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:15:53.025415  510243 status.go:257] ha-590117-m04 status: &{Name:ha-590117-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.35s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.46s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.52s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-590117 node start m02 -v=7 --alsologtostderr: (18.204994282s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr: (1.242545229s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.45s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.449292688s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.45s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (232.42s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-590117 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-590117 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-590117 -v=7 --alsologtostderr: (33.61558062s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-590117 --wait=true -v=7 --alsologtostderr
E0829 19:17:04.551651  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.263767  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.270174  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.281526  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.303147  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.344572  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.426056  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.587527  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:28.909197  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:29.551244  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:30.832965  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:33.395057  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:38.516799  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:17:48.758708  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:09.240819  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:18:50.202626  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:20.688018  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:19:48.393004  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-590117 --wait=true -v=7 --alsologtostderr: (3m18.709374289s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-590117
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (232.42s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.37s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 node delete m03 -v=7 --alsologtostderr
E0829 19:20:12.124527  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-590117 node delete m03 -v=7 --alsologtostderr: (8.615995463s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.37s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.46s)

TestMultiControlPlane/serial/StopCluster (32.35s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-590117 stop -v=7 --alsologtostderr: (32.247470109s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr: exit status 7 (98.845583ms)

-- stdout --
	ha-590117
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-590117-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-590117-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 19:20:50.992403  542464 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:20:50.992528  542464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:20:50.992538  542464 out.go:358] Setting ErrFile to fd 2...
	I0829 19:20:50.992542  542464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:20:50.992740  542464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:20:50.992883  542464 out.go:352] Setting JSON to false
	I0829 19:20:50.992909  542464 mustload.go:65] Loading cluster: ha-590117
	I0829 19:20:50.992963  542464 notify.go:220] Checking for updates...
	I0829 19:20:50.993428  542464 config.go:182] Loaded profile config "ha-590117": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:20:50.993451  542464 status.go:255] checking status of ha-590117 ...
	I0829 19:20:50.993861  542464 cli_runner.go:164] Run: docker container inspect ha-590117 --format={{.State.Status}}
	I0829 19:20:51.012954  542464 status.go:330] ha-590117 host status = "Stopped" (err=<nil>)
	I0829 19:20:51.012980  542464 status.go:343] host is not running, skipping remaining checks
	I0829 19:20:51.012988  542464 status.go:257] ha-590117 status: &{Name:ha-590117 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:20:51.013016  542464 status.go:255] checking status of ha-590117-m02 ...
	I0829 19:20:51.013343  542464 cli_runner.go:164] Run: docker container inspect ha-590117-m02 --format={{.State.Status}}
	I0829 19:20:51.031337  542464 status.go:330] ha-590117-m02 host status = "Stopped" (err=<nil>)
	I0829 19:20:51.031374  542464 status.go:343] host is not running, skipping remaining checks
	I0829 19:20:51.031384  542464 status.go:257] ha-590117-m02 status: &{Name:ha-590117-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:20:51.031407  542464 status.go:255] checking status of ha-590117-m04 ...
	I0829 19:20:51.031650  542464 cli_runner.go:164] Run: docker container inspect ha-590117-m04 --format={{.State.Status}}
	I0829 19:20:51.048375  542464 status.go:330] ha-590117-m04 host status = "Stopped" (err=<nil>)
	I0829 19:20:51.048408  542464 status.go:343] host is not running, skipping remaining checks
	I0829 19:20:51.048416  542464 status.go:257] ha-590117-m04 status: &{Name:ha-590117-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.35s)

TestMultiControlPlane/serial/RestartCluster (92.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-590117 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-590117 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m31.526723401s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (92.27s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

TestMultiControlPlane/serial/AddSecondaryNode (39.62s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-590117 --control-plane -v=7 --alsologtostderr
E0829 19:22:28.264381  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:22:55.966871  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-590117 --control-plane -v=7 --alsologtostderr: (38.805981342s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-590117 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.62s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.62s)

TestImageBuild/serial/Setup (24.02s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-305905 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-305905 --driver=docker  --container-runtime=docker: (24.018389529s)
--- PASS: TestImageBuild/serial/Setup (24.02s)

TestImageBuild/serial/NormalBuild (1.22s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-305905
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-305905: (1.22437716s)
--- PASS: TestImageBuild/serial/NormalBuild (1.22s)

TestImageBuild/serial/BuildWithBuildArg (0.74s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-305905
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.74s)

TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-305905
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-305905
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.55s)

TestJSONOutput/start/Command (37.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-420999 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-420999 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (37.328949615s)
--- PASS: TestJSONOutput/start/Command (37.33s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.53s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-420999 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.53s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.42s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-420999 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.42s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-420999 --output=json --user=testUser
E0829 19:24:20.688079  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-420999 --output=json --user=testUser: (10.880434922s)
--- PASS: TestJSONOutput/stop/Command (10.88s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-050584 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-050584 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.344397ms)
-- stdout --
	{"specversion":"1.0","id":"c875cd50-e6b2-49db-951a-4cbcaec337ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-050584] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"45961756-1c9c-4eaa-adeb-60ac52703edc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19530"}}
	{"specversion":"1.0","id":"59ad02a5-5ed4-4ce9-bef6-0d8c99fe7557","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c18247f-06b8-4dad-84da-22ba30abaebe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig"}}
	{"specversion":"1.0","id":"cd12b866-b5d7-4a33-aaf1-7df2c3014171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube"}}
	{"specversion":"1.0","id":"3fa544fc-b8ee-456a-9894-a892f315cbee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e4d86ff3-c083-4a06-913b-dec94c4d01a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b8463049-5a88-488e-8230-92654b5e7c98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-050584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-050584
--- PASS: TestErrorJSONOutput (0.20s)
TestKicCustomNetwork/create_custom_network (22.99s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-959078 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-959078 --network=: (20.873971543s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-959078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-959078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-959078: (2.101059222s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.99s)
TestKicCustomNetwork/use_default_bridge_network (26.17s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-079117 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-079117 --network=bridge: (24.259340045s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-079117" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-079117
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-079117: (1.895564598s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.17s)
TestKicExistingNetwork (25.24s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-945437 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-945437 --network=existing-network: (23.259271984s)
helpers_test.go:175: Cleaning up "existing-network-945437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-945437
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-945437: (1.838032122s)
--- PASS: TestKicExistingNetwork (25.24s)
TestKicCustomSubnet (23.21s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-143048 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-143048 --subnet=192.168.60.0/24: (21.15778212s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-143048 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-143048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-143048
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-143048: (2.032913799s)
--- PASS: TestKicCustomSubnet (23.21s)
TestKicStaticIP (27.21s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-333936 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-333936 --static-ip=192.168.200.200: (25.039202252s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-333936 ip
helpers_test.go:175: Cleaning up "static-ip-333936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-333936
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-333936: (2.059778494s)
--- PASS: TestKicStaticIP (27.21s)
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (47.97s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-201896 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-201896 --driver=docker  --container-runtime=docker: (21.566219595s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-204883 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-204883 --driver=docker  --container-runtime=docker: (21.422279781s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-201896
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-204883
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-204883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-204883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-204883: (1.954527128s)
helpers_test.go:175: Cleaning up "first-201896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-201896
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-201896: (1.99049047s)
--- PASS: TestMinikubeProfile (47.97s)
TestMountStart/serial/StartWithMountFirst (9.55s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-950391 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0829 19:27:28.264100  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-950391 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.547701553s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.55s)
TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-950391 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
TestMountStart/serial/StartWithMountSecond (9.28s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-962808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-962808 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.283077398s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.28s)
TestMountStart/serial/VerifyMountSecond (0.23s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-962808 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)
TestMountStart/serial/DeleteFirst (1.43s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-950391 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-950391 --alsologtostderr -v=5: (1.432459514s)
--- PASS: TestMountStart/serial/DeleteFirst (1.43s)
TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-962808 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)
TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-962808
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-962808: (1.168058106s)
--- PASS: TestMountStart/serial/Stop (1.17s)
TestMountStart/serial/RestartStopped (7.73s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-962808
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-962808: (6.733649971s)
--- PASS: TestMountStart/serial/RestartStopped (7.73s)
TestMountStart/serial/VerifyMountPostStop (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-962808 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)
TestMultiNode/serial/FreshStart2Nodes (57.5s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-364286 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-364286 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.072106341s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (57.50s)
TestMultiNode/serial/DeployApp2Nodes (40.32s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-364286 -- rollout status deployment/busybox: (2.349731451s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0829 19:29:20.687858  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-ftd4c -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-l4ts9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-ftd4c -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-l4ts9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-ftd4c -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-l4ts9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (40.32s)
TestMultiNode/serial/PingHostFrom2Pods (0.71s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-ftd4c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-ftd4c -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-l4ts9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-364286 -- exec busybox-7dff88458-l4ts9 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
TestMultiNode/serial/AddNode (17.89s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-364286 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-364286 -v 3 --alsologtostderr: (17.309642222s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.89s)
TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-364286 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
TestMultiNode/serial/ProfileList (0.28s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.28s)
TestMultiNode/serial/CopyFile (8.78s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp testdata/cp-test.txt multinode-364286:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2315940141/001/cp-test_multinode-364286.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286:/home/docker/cp-test.txt multinode-364286-m02:/home/docker/cp-test_multinode-364286_multinode-364286-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test_multinode-364286_multinode-364286-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286:/home/docker/cp-test.txt multinode-364286-m03:/home/docker/cp-test_multinode-364286_multinode-364286-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test_multinode-364286_multinode-364286-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp testdata/cp-test.txt multinode-364286-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2315940141/001/cp-test_multinode-364286-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m02:/home/docker/cp-test.txt multinode-364286:/home/docker/cp-test_multinode-364286-m02_multinode-364286.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test_multinode-364286-m02_multinode-364286.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m02:/home/docker/cp-test.txt multinode-364286-m03:/home/docker/cp-test_multinode-364286-m02_multinode-364286-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test_multinode-364286-m02_multinode-364286-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp testdata/cp-test.txt multinode-364286-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2315940141/001/cp-test_multinode-364286-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m03:/home/docker/cp-test.txt multinode-364286:/home/docker/cp-test_multinode-364286-m03_multinode-364286.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286 "sudo cat /home/docker/cp-test_multinode-364286-m03_multinode-364286.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 cp multinode-364286-m03:/home/docker/cp-test.txt multinode-364286-m02:/home/docker/cp-test_multinode-364286-m03_multinode-364286-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 ssh -n multinode-364286-m02 "sudo cat /home/docker/cp-test_multinode-364286-m03_multinode-364286-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.78s)
TestMultiNode/serial/StopNode (2.1s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-364286 node stop m03: (1.17908462s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-364286 status: exit status 7 (459.144544ms)
-- stdout --
	multinode-364286
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-364286-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-364286-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr: exit status 7 (460.051578ms)
-- stdout --
	multinode-364286
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-364286-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-364286-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 19:30:03.541517  628457 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:30:03.541633  628457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:30:03.541642  628457 out.go:358] Setting ErrFile to fd 2...
	I0829 19:30:03.541647  628457 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:30:03.541834  628457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:30:03.542012  628457 out.go:352] Setting JSON to false
	I0829 19:30:03.542043  628457 mustload.go:65] Loading cluster: multinode-364286
	I0829 19:30:03.542111  628457 notify.go:220] Checking for updates...
	I0829 19:30:03.542566  628457 config.go:182] Loaded profile config "multinode-364286": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:30:03.542590  628457 status.go:255] checking status of multinode-364286 ...
	I0829 19:30:03.543139  628457 cli_runner.go:164] Run: docker container inspect multinode-364286 --format={{.State.Status}}
	I0829 19:30:03.560628  628457 status.go:330] multinode-364286 host status = "Running" (err=<nil>)
	I0829 19:30:03.560667  628457 host.go:66] Checking if "multinode-364286" exists ...
	I0829 19:30:03.560999  628457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-364286
	I0829 19:30:03.578138  628457 host.go:66] Checking if "multinode-364286" exists ...
	I0829 19:30:03.578463  628457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:30:03.578537  628457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-364286
	I0829 19:30:03.596099  628457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/multinode-364286/id_rsa Username:docker}
	I0829 19:30:03.688177  628457 ssh_runner.go:195] Run: systemctl --version
	I0829 19:30:03.692545  628457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:30:03.704082  628457 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 19:30:03.753140  628457 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-29 19:30:03.743898515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-14 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 19:30:03.753808  628457 kubeconfig.go:125] found "multinode-364286" server: "https://192.168.67.2:8443"
	I0829 19:30:03.753837  628457 api_server.go:166] Checking apiserver status ...
	I0829 19:30:03.753894  628457 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 19:30:03.765194  628457 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2387/cgroup
	I0829 19:30:03.774743  628457 api_server.go:182] apiserver freezer: "4:freezer:/docker/48dea258334e3efbe0fbb62ff0fe4f5282f15a60a7269c985c3eec68045f6fb7/kubepods/burstable/pod0fb4ed7e6179e37c1baecd5b013ec770/f0baa58f4cb81710a5b62c963c10aeb8e07718cf9fd42e8b9522d1459b77ec0e"
	I0829 19:30:03.774858  628457 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/48dea258334e3efbe0fbb62ff0fe4f5282f15a60a7269c985c3eec68045f6fb7/kubepods/burstable/pod0fb4ed7e6179e37c1baecd5b013ec770/f0baa58f4cb81710a5b62c963c10aeb8e07718cf9fd42e8b9522d1459b77ec0e/freezer.state
	I0829 19:30:03.782918  628457 api_server.go:204] freezer state: "THAWED"
	I0829 19:30:03.782953  628457 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0829 19:30:03.786632  628457 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0829 19:30:03.786656  628457 status.go:422] multinode-364286 apiserver status = Running (err=<nil>)
	I0829 19:30:03.786671  628457 status.go:257] multinode-364286 status: &{Name:multinode-364286 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:30:03.786693  628457 status.go:255] checking status of multinode-364286-m02 ...
	I0829 19:30:03.787039  628457 cli_runner.go:164] Run: docker container inspect multinode-364286-m02 --format={{.State.Status}}
	I0829 19:30:03.804513  628457 status.go:330] multinode-364286-m02 host status = "Running" (err=<nil>)
	I0829 19:30:03.804543  628457 host.go:66] Checking if "multinode-364286-m02" exists ...
	I0829 19:30:03.804818  628457 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-364286-m02
	I0829 19:30:03.822183  628457 host.go:66] Checking if "multinode-364286-m02" exists ...
	I0829 19:30:03.822501  628457 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 19:30:03.822540  628457 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-364286-m02
	I0829 19:30:03.840038  628457 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19530-418716/.minikube/machines/multinode-364286-m02/id_rsa Username:docker}
	I0829 19:30:03.928376  628457 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 19:30:03.939305  628457 status.go:257] multinode-364286-m02 status: &{Name:multinode-364286-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:30:03.939339  628457 status.go:255] checking status of multinode-364286-m03 ...
	I0829 19:30:03.939676  628457 cli_runner.go:164] Run: docker container inspect multinode-364286-m03 --format={{.State.Status}}
	I0829 19:30:03.956873  628457 status.go:330] multinode-364286-m03 host status = "Stopped" (err=<nil>)
	I0829 19:30:03.956895  628457 status.go:343] host is not running, skipping remaining checks
	I0829 19:30:03.956908  628457 status.go:257] multinode-364286-m03 status: &{Name:multinode-364286-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

TestMultiNode/serial/StartAfterStop (9.73s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-364286 node start m03 -v=7 --alsologtostderr: (9.089524088s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.73s)

TestMultiNode/serial/RestartKeepsNodes (98.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-364286
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-364286
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-364286: (22.372252241s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-364286 --wait=true -v=8 --alsologtostderr
E0829 19:30:43.755046  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-364286 --wait=true -v=8 --alsologtostderr: (1m16.223026486s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-364286
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.69s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-364286 node delete m03: (4.595110175s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (21.36s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-364286 stop: (21.201622523s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-364286 status: exit status 7 (79.575704ms)

-- stdout --
	multinode-364286
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-364286-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr: exit status 7 (78.69961ms)

-- stdout --
	multinode-364286
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-364286-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0829 19:32:18.838609  643712 out.go:345] Setting OutFile to fd 1 ...
	I0829 19:32:18.838885  643712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:32:18.838895  643712 out.go:358] Setting ErrFile to fd 2...
	I0829 19:32:18.838900  643712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 19:32:18.839143  643712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19530-418716/.minikube/bin
	I0829 19:32:18.839354  643712 out.go:352] Setting JSON to false
	I0829 19:32:18.839384  643712 mustload.go:65] Loading cluster: multinode-364286
	I0829 19:32:18.839497  643712 notify.go:220] Checking for updates...
	I0829 19:32:18.839841  643712 config.go:182] Loaded profile config "multinode-364286": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 19:32:18.839858  643712 status.go:255] checking status of multinode-364286 ...
	I0829 19:32:18.840282  643712 cli_runner.go:164] Run: docker container inspect multinode-364286 --format={{.State.Status}}
	I0829 19:32:18.857490  643712 status.go:330] multinode-364286 host status = "Stopped" (err=<nil>)
	I0829 19:32:18.857510  643712 status.go:343] host is not running, skipping remaining checks
	I0829 19:32:18.857518  643712 status.go:257] multinode-364286 status: &{Name:multinode-364286 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 19:32:18.857544  643712 status.go:255] checking status of multinode-364286-m02 ...
	I0829 19:32:18.857802  643712 cli_runner.go:164] Run: docker container inspect multinode-364286-m02 --format={{.State.Status}}
	I0829 19:32:18.874169  643712 status.go:330] multinode-364286-m02 host status = "Stopped" (err=<nil>)
	I0829 19:32:18.874196  643712 status.go:343] host is not running, skipping remaining checks
	I0829 19:32:18.874203  643712 status.go:257] multinode-364286-m02 status: &{Name:multinode-364286-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.36s)

TestMultiNode/serial/RestartMultiNode (57.85s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-364286 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 19:32:28.264103  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-364286 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (57.274893655s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-364286 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.85s)

TestMultiNode/serial/ValidateNameConflict (24.72s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-364286
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-364286-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-364286-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.391378ms)

-- stdout --
	* [multinode-364286-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-364286-m02' is duplicated with machine name 'multinode-364286-m02' in profile 'multinode-364286'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-364286-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-364286-m03 --driver=docker  --container-runtime=docker: (22.332681333s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-364286
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-364286: exit status 80 (254.379853ms)

-- stdout --
	* Adding node m03 to cluster multinode-364286 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-364286-m03 already exists in multinode-364286-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-364286-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-364286-m03: (2.022336332s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.72s)

TestPreload (85.62s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817386 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0829 19:33:51.329156  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:34:20.687740  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817386 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (50.903702409s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817386 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-817386 image pull gcr.io/k8s-minikube/busybox: (1.300257853s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-817386
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-817386: (10.759039233s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817386 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817386 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (20.229962627s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817386 image list
helpers_test.go:175: Cleaning up "test-preload-817386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-817386
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-817386: (2.21858905s)
--- PASS: TestPreload (85.62s)

TestScheduledStopUnix (94.55s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-976353 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-976353 --memory=2048 --driver=docker  --container-runtime=docker: (21.706623948s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976353 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-976353 -n scheduled-stop-976353
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976353 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976353 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976353 -n scheduled-stop-976353
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-976353
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-976353 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-976353
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-976353: exit status 7 (61.616901ms)

-- stdout --
	scheduled-stop-976353
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976353 -n scheduled-stop-976353
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-976353 -n scheduled-stop-976353: exit status 7 (61.169477ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-976353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-976353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-976353: (1.621537263s)
--- PASS: TestScheduledStopUnix (94.55s)

TestSkaffold (95.5s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe529655546 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-547456 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-547456 --memory=2600 --driver=docker  --container-runtime=docker: (21.086837773s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe529655546 run --minikube-profile skaffold-547456 --kube-context skaffold-547456 --status-check=true --port-forward=false --interactive=false
E0829 19:37:28.264366  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe529655546 run --minikube-profile skaffold-547456 --kube-context skaffold-547456 --status-check=true --port-forward=false --interactive=false: (1m0.058906203s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5d9967696-v2spt" [d8bbca18-ce28-4ba1-ac3a-49cb46ff7334] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.002927388s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-695d88dc8c-4m9n7" [b86b4163-a325-486a-8258-77684c4c723e] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003416587s
helpers_test.go:175: Cleaning up "skaffold-547456" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-547456
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-547456: (2.699201625s)
--- PASS: TestSkaffold (95.50s)

TestInsufficientStorage (12.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-001623 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-001623 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.315262772s)

-- stdout --
	{"specversion":"1.0","id":"70ec6306-6b0f-4be5-b629-4292f42ca8e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-001623] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b0785d4-f92b-473c-b3f5-60a8609add9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19530"}}
	{"specversion":"1.0","id":"e00edd37-bffe-40a3-86b8-0581e8161b3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1597af0d-7908-4a7c-ac28-e462f9bf54b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig"}}
	{"specversion":"1.0","id":"7774849f-2d5f-4272-a4d6-df813d1fe285","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube"}}
	{"specversion":"1.0","id":"6995cd84-48cb-4398-a373-fd503ca003e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4e93d15e-dc4f-4bba-af69-299b214443b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"621c4429-5cc9-48f6-8ef8-c6feb2aa0f87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1d0cbc95-ac83-4e4c-8a74-b52dd5fd299a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"76359bdc-3098-464c-a952-3b7ed26f6819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8191e06a-6abd-48fc-9d2d-8f793ed6580f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"636cfe5a-d0e5-4251-9b95-81e84b2bb046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-001623\" primary control-plane node in \"insufficient-storage-001623\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"33d17f66-11ef-4922-81e2-fb7b1c6f4c86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724862063-19530 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2caf0dd-25af-47dd-9db4-a96ab17bf63e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"02aeedf9-d1e3-4444-8dcb-4c0619f93b42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-001623 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-001623 --output=json --layout=cluster: exit status 7 (242.61151ms)

-- stdout --
	{"Name":"insufficient-storage-001623","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001623","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0829 19:38:31.460368  683541 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-001623" does not appear in /home/jenkins/minikube-integration/19530-418716/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-001623 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-001623 --output=json --layout=cluster: exit status 7 (241.359252ms)

-- stdout --
	{"Name":"insufficient-storage-001623","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-001623","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0829 19:38:31.702219  683644 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-001623" does not appear in /home/jenkins/minikube-integration/19530-418716/kubeconfig
	E0829 19:38:31.712191  683644 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/insufficient-storage-001623/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-001623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-001623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-001623: (1.627020856s)
--- PASS: TestInsufficientStorage (12.43s)
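The `--output=json --layout=cluster` documents above are machine-readable; a minimal Python sketch of how such a payload can be inspected (the sample document and the 507/405 status codes are copied from the log output above; the helper name is illustrative, not part of minikube):

```python
import json

# Status document as printed by `minikube status --output=json --layout=cluster`
# (trimmed copy of the log output above; 507 = InsufficientStorage, 405 = Stopped).
doc = json.loads("""
{"Name": "insufficient-storage-001623",
 "StatusCode": 507,
 "StatusName": "InsufficientStorage",
 "Nodes": [{"Name": "insufficient-storage-001623",
            "StatusCode": 507,
            "StatusName": "InsufficientStorage",
            "Components": {
              "apiserver": {"Name": "apiserver", "StatusCode": 405, "StatusName": "Stopped"},
              "kubelet":   {"Name": "kubelet",   "StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

def unhealthy_components(doc):
    """Collect (node, component, status) triples whose StatusCode is not 200 (OK)."""
    return [(node["Name"], comp["Name"], comp["StatusName"])
            for node in doc.get("Nodes", [])
            for comp in node.get("Components", {}).values()
            if comp["StatusCode"] != 200]

print(doc["StatusName"])  # InsufficientStorage
for node, comp, status in unhealthy_components(doc):
    print(f"{node}/{comp}: {status}")
```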

TestRunningBinaryUpgrade (60.9s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.354312207 start -p running-upgrade-629385 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.354312207 start -p running-upgrade-629385 --memory=2200 --vm-driver=docker  --container-runtime=docker: (26.918258196s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-629385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-629385 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.211582835s)
helpers_test.go:175: Cleaning up "running-upgrade-629385" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-629385
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-629385: (2.082062472s)
--- PASS: TestRunningBinaryUpgrade (60.90s)

TestKubernetesUpgrade (341.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (41.053066536s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-988919
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-988919: (10.671939794s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-988919 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-988919 status --format={{.Host}}: exit status 7 (63.963608ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m27.665015416s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-988919 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (71.837969ms)

-- stdout --
	* [kubernetes-upgrade-988919] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-988919
	    minikube start -p kubernetes-upgrade-988919 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9889192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-988919 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-988919 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.491989276s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-988919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-988919
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-988919: (2.313118369s)
--- PASS: TestKubernetesUpgrade (341.41s)

TestMissingContainerUpgrade (137.49s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3681500100 start -p missing-upgrade-966587 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3681500100 start -p missing-upgrade-966587 --memory=2200 --driver=docker  --container-runtime=docker: (1m10.073053931s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-966587
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-966587: (10.408255877s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-966587
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-966587 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-966587 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.486338113s)
helpers_test.go:175: Cleaning up "missing-upgrade-966587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-966587
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-966587: (2.078859084s)
--- PASS: TestMissingContainerUpgrade (137.49s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (77.095613ms)

-- stdout --
	* [NoKubernetes-469671] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19530
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19530-418716/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19530-418716/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (34.89s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-469671 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-469671 --driver=docker  --container-runtime=docker: (34.42998797s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-469671 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.89s)

TestNoKubernetes/serial/StartWithStopK8s (16.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --driver=docker  --container-runtime=docker: (14.357427754s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-469671 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-469671 status -o json: exit status 2 (298.608377ms)

-- stdout --
	{"Name":"NoKubernetes-469671","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-469671
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-469671: (1.730858831s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.39s)
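The `status -o json` document above uses a flat schema (`Host`/`Kubelet`/`APIServer`); a small sketch of the distinction this test relies on, with the payload copied verbatim from the log output above:

```python
import json

# Output of `minikube -p NoKubernetes-469671 status -o json`, copied from the
# log above: the host container is running while Kubernetes itself is stopped,
# which is why the status command exits non-zero (status 2) here.
status = json.loads(
    '{"Name":"NoKubernetes-469671","Host":"Running","Kubelet":"Stopped",'
    '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
)

host_up = status["Host"] == "Running"
k8s_up = status["Kubelet"] == "Running" and status["APIServer"] == "Running"
print(f"host_up={host_up} k8s_up={k8s_up}")  # host_up=True k8s_up=False
```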

TestNoKubernetes/serial/Start (8.93s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-469671 --no-kubernetes --driver=docker  --container-runtime=docker: (8.933712782s)
--- PASS: TestNoKubernetes/serial/Start (8.93s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-469671 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-469671 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.288753ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (3.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.298242084s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.15s)

TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-469671
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-469671: (1.185933378s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (7.48s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-469671 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-469671 --driver=docker  --container-runtime=docker: (7.481677548s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-469671 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-469671 "sudo systemctl is-active --quiet service kubelet": exit status 1 (300.57102ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/Setup (0.39s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.39s)

TestStoppedBinaryUpgrade/Upgrade (124.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1968374800 start -p stopped-upgrade-889327 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1968374800 start -p stopped-upgrade-889327 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m30.743008346s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1968374800 -p stopped-upgrade-889327 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1968374800 -p stopped-upgrade-889327 stop: (10.872966253s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-889327 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-889327 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.117727251s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (124.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-889327
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-889327: (1.157989572s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.16s)

TestPause/serial/Start (37.85s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-857473 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0829 19:42:28.263640  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-857473 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (37.853584019s)
--- PASS: TestPause/serial/Start (37.85s)

TestNetworkPlugins/group/auto/Start (71.86s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m11.861453928s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.86s)

TestNetworkPlugins/group/kindnet/Start (58.55s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (58.549842226s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.55s)

TestPause/serial/SecondStartNoReconfiguration (34.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-857473 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0829 19:43:07.197558  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.204008  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.216148  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.237498  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.279600  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.361373  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.522867  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:07.844291  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:08.486408  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:09.768434  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:12.330461  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:17.452661  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:43:27.694045  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-857473 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.617441689s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.63s)

TestPause/serial/Pause (0.53s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-857473 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

TestPause/serial/VerifyStatus (0.27s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-857473 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-857473 --output=json --layout=cluster: exit status 2 (272.767286ms)

-- stdout --
	{"Name":"pause-857473","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-857473","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)

TestPause/serial/Unpause (0.42s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-857473 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.42s)

TestPause/serial/PauseAgain (0.60s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-857473 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

TestPause/serial/DeletePaused (2.10s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-857473 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-857473 --alsologtostderr -v=5: (2.097521633s)
--- PASS: TestPause/serial/DeletePaused (2.10s)

TestPause/serial/VerifyDeletedResources (0.64s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-857473
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-857473: exit status 1 (17.143094ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-857473: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.64s)

TestNetworkPlugins/group/calico/Start (55.09s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0829 19:43:48.175724  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (55.090096702s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.09s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j7gkz" [971a0b75-d476-42cc-9442-9e3c752cf429] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j7gkz" [971a0b75-d476-42cc-9442-9e3c752cf429] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003376312s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-prxxz" [0d2a59c5-3d09-4533-bad3-af4ebc45855b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005997967s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8g6jd" [9a9df8ea-f033-4cd5-8861-659859faafac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8g6jd" [9a9df8ea-f033-4cd5-8861-659859faafac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003651567s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (47.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0829 19:44:29.137337  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (47.238681413s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (47.24s)

TestNetworkPlugins/group/false/Start (64.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m4.521172347s)
--- PASS: TestNetworkPlugins/group/false/Start (64.52s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-lmb8d" [a43624ba-d5dd-403c-a36f-ebb6e67a3992] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004863778s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7gs4x" [1c6fc91f-45fc-4133-88f1-caca65782fe3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7gs4x" [1c6fc91f-45fc-4133-88f1-caca65782fe3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00504261s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.25s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9qmq8" [e0779d10-c863-4ee4-9c14-5d7c9add5475] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9qmq8" [e0779d10-c863-4ee4-9c14-5d7c9add5475] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003526931s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

TestNetworkPlugins/group/enable-default-cni/Start (67.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m7.55848167s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.56s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/flannel/Start (49.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (49.004927535s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.01s)

TestNetworkPlugins/group/false/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.32s)

TestNetworkPlugins/group/false/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-68hhw" [ebf037c3-6bc9-4008-8ec9-e79be4eb93f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-68hhw" [ebf037c3-6bc9-4008-8ec9-e79be4eb93f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.005199328s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.25s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-659208 exec deployment/netcat -- nslookup kubernetes.default
E0829 19:45:51.059540  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (40.15s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (40.153007851s)
--- PASS: TestNetworkPlugins/group/bridge/Start (40.15s)

TestNetworkPlugins/group/kubenet/Start (32.07s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-659208 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (32.06624475s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (32.07s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wccdd" [c9554edf-57e4-4bb8-960a-7082ad9ac215] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wccdd" [c9554edf-57e4-4bb8-960a-7082ad9ac215] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004717651s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tjgfv" [f1cf96b2-8933-43a8-8875-462032f5ab66] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004084104s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dpsvd" [43a02d61-b222-4d2d-8113-fd768b4be7ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dpsvd" [43a02d61-b222-4d2d-8113-fd768b4be7ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004274905s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tpk98" [01c891f1-3e47-490a-8193-294153dd2a0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tpk98" [01c891f1-3e47-490a-8193-294153dd2a0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.003637378s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.18s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-659208 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kubenet/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-659208 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fklrh" [6bc74a2f-030b-41dd-9a87-e040fb6d32ca] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fklrh" [6bc74a2f-030b-41dd-9a87-e040fb6d32ca] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004787118s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-659208 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-659208 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)
E0829 19:52:01.907845  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:03.525835  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:06.162290  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:09.061797  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:15.822672  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:24.008084  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:24.372399  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:28.263946  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:47.124222  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:50.023855  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:52.603190  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:52:56.784962  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/old-k8s-version/serial/FirstStart (162.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-636020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-636020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m42.496971243s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.50s)

TestStartStop/group/no-preload/serial/FirstStart (47.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-772045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-772045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (47.600539543s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.60s)

TestStartStop/group/embed-certs/serial/FirstStart (74.22s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-066882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-066882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m14.224204726s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (74.22s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-730557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:47:23.757205  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:47:28.264299  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-730557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m13.125125349s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (73.13s)

TestStartStop/group/no-preload/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772045 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [14eafab2-5e13-43b4-8cb4-370d7910f58f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [14eafab2-5e13-43b4-8cb4-370d7910f58f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003600577s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772045 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-772045 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-772045 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/no-preload/serial/Stop (10.74s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-772045 --alsologtostderr -v=3
E0829 19:48:07.197916  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-772045 --alsologtostderr -v=3: (10.737574206s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.74s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772045 -n no-preload-772045
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772045 -n no-preload-772045: exit status 7 (69.839265ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-772045 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (300.16s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-772045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-772045 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m59.878257416s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-772045 -n no-preload-772045
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.16s)

TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-066882 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f17c8619-c2ce-4fd9-8846-ea94aa984dcb] Pending
helpers_test.go:344: "busybox" [f17c8619-c2ce-4fd9-8846-ea94aa984dcb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f17c8619-c2ce-4fd9-8846-ea94aa984dcb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004028378s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-066882 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730557 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d108417b-3b8b-41a3-a305-c90a985d99b8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d108417b-3b8b-41a3-a305-c90a985d99b8] Running
E0829 19:48:34.901135  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003304155s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730557 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-066882 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-066882 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.99s)

TestStartStop/group/embed-certs/serial/Stop (10.82s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-066882 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-066882 --alsologtostderr -v=3: (10.815386179s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-730557 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-730557 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-730557 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-730557 --alsologtostderr -v=3: (10.948874977s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066882 -n embed-certs-066882
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066882 -n embed-certs-066882: exit status 7 (145.993688ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-066882 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/embed-certs/serial/SecondStart (262.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-066882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-066882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.376424627s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-066882 -n embed-certs-066882
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.68s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557: exit status 7 (61.333306ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-730557 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-730557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:48:51.760125  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:51.766615  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:51.778045  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:51.799450  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:51.841079  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:51.923347  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:52.084879  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:52.406592  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:53.047951  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:54.329725  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.879512  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.886696  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.891015  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.898891  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.921135  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:56.963468  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:57.045260  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:57.207245  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:57.528795  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:58.171101  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:48:59.453617  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:02.013170  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:02.019507  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:07.141587  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:12.254924  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:17.383903  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:20.687921  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/addons-505336/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:32.736363  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-730557 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m23.132474622s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636020 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5335ba84-4669-47e7-b894-8f231ae6c813] Pending
helpers_test.go:344: "busybox" [5335ba84-4669-47e7-b894-8f231ae6c813] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0829 19:49:37.865292  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [5335ba84-4669-47e7-b894-8f231ae6c813] Running
E0829 19:49:40.511867  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.518243  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.529614  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.550954  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.592336  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.673803  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:40.835790  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:41.157663  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:41.799851  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:43.081842  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004264816s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-636020 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-636020 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-636020 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (10.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-636020 --alsologtostderr -v=3
E0829 19:49:45.643853  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:49:50.765301  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-636020 --alsologtostderr -v=3: (10.760189811s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-636020 -n old-k8s-version-636020
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-636020 -n old-k8s-version-636020: exit status 7 (61.816788ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-636020 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (24.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-636020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0829 19:50:01.006948  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.743697  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.750082  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.761455  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.782877  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.824374  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:08.905820  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:09.067362  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:09.389559  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:10.031046  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:11.313037  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:13.698422  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:13.874506  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:18.827274  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:18.996187  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-636020 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (23.977760145s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-636020 -n old-k8s-version-636020
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (24.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0829 19:50:21.488359  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:29.238008  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:31.331248  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/functional-036671/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xhhrm" [1fca2f81-50ab-4abc-bb40-f852adf29782] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0829 19:50:39.969745  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:39.976245  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:39.987625  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:40.009010  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:40.050412  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:40.131917  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:40.293709  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:40.615545  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xhhrm" [1fca2f81-50ab-4abc-bb40-f852adf29782] Running
E0829 19:50:41.257475  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:42.539761  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:45.101811  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 26.003476705s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (26.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xhhrm" [1fca2f81-50ab-4abc-bb40-f852adf29782] Running
E0829 19:50:49.719460  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:50:50.223483  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00391339s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-636020 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-636020 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-636020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-636020 -n old-k8s-version-636020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-636020 -n old-k8s-version-636020: exit status 2 (273.834485ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-636020 -n old-k8s-version-636020
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-636020 -n old-k8s-version-636020: exit status 2 (280.326677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-636020 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-636020 -n old-k8s-version-636020
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-636020 -n old-k8s-version-636020
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.32s)

TestStartStop/group/newest-cni/serial/FirstStart (31.26s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-470130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:51:00.465068  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:02.450090  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/calico-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:20.946427  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/false-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.185712  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.192065  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.203440  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.224845  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.266656  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.348119  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.509634  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:25.831137  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:26.472455  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:27.754462  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-470130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (31.264646483s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-470130 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0829 19:51:28.084962  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.091389  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.102988  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.124402  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.165869  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.247345  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.409425  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:28.730919  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (10.92s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-470130 --alsologtostderr -v=3
E0829 19:51:29.372444  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:30.316767  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:30.654536  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:30.681077  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/custom-flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:33.216548  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:34.844825  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:34.851195  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:34.862549  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:34.883943  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:34.925351  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:35.006844  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:35.168635  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:35.438446  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:35.490933  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:35.620468  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/auto-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:36.133063  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:37.414737  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:38.337887  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-470130 --alsologtostderr -v=3: (10.915789503s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.92s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-470130 -n newest-cni-470130
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-470130 -n newest-cni-470130: exit status 7 (120.594384ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
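The `(may be ok)` annotation above reflects how the harness classifies `minikube status` exit codes: immediately after a stop or pause, certain non-zero codes are expected rather than fatal. A minimal shell sketch of that classification follows; the helper name and the exact code-to-state mapping are assumptions inferred from this log, not minikube's documented contract.

```shell
# interpret_status is a hypothetical helper mirroring how
# start_stop_delete_test.go treats "minikube status" exit codes.
# Assumed mapping (inferred from the "(may be ok)" lines in this log):
#   0 = everything running, 2 = a component is stopped/paused,
#   7 = the host itself is stopped.
interpret_status() {
  case "$1" in
    0)   echo "ok" ;;
    2|7) echo "status error: exit status $1 (may be ok)" ;;
    *)   echo "unexpected exit status $1" ;;
  esac
}

interpret_status 7   # host reports Stopped after "minikube stop"
interpret_status 2   # kubelet reports Stopped after "minikube pause"
```

In the log above, exit status 7 right after `Stop` and exit status 2 during `Pause` both fall in the tolerated branch, so the tests proceed and ultimately pass.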
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-470130 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0829 19:51:39.976617  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/newest-cni/serial/SecondStart (14.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-470130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 19:51:40.749047  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kindnet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.031219  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.037595  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.048952  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.070308  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.111777  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.193228  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.354824  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:43.676120  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:44.317857  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:45.098320  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:45.600186  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:45.680644  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/enable-default-cni-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:48.162063  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:48.579466  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/flannel-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:51:53.284283  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-470130 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (14.288400051s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-470130 -n newest-cni-470130
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-470130 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/newest-cni/serial/Pause (2.4s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-470130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-470130 -n newest-cni-470130
E0829 19:51:55.340599  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/bridge-659208/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-470130 -n newest-cni-470130: exit status 2 (279.63065ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-470130 -n newest-cni-470130
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-470130 -n newest-cni-470130: exit status 2 (266.923039ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-470130 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-470130 -n newest-cni-470130
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-470130 -n newest-cni-470130
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tqdzh" [9d50d510-256e-4d88-8de5-e0b01e65f48f] Running
E0829 19:53:04.969990  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/kubenet-659208/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:53:07.198116  425496 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/skaffold-547456/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006061229s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tqdzh" [9d50d510-256e-4d88-8de5-e0b01e65f48f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003477887s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-066882 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wtcvb" [2be7ccee-bf2e-4ec5-94b4-4cc159170be6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003865073s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gjjsq" [28335407-2ffb-4d0b-8c1a-9953310831c1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004268898s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-066882 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-066882 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066882 -n embed-certs-066882
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066882 -n embed-certs-066882: exit status 2 (274.105224ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066882 -n embed-certs-066882
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066882 -n embed-certs-066882: exit status 2 (283.908209ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-066882 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-066882 -n embed-certs-066882
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-066882 -n embed-certs-066882
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.22s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wtcvb" [2be7ccee-bf2e-4ec5-94b4-4cc159170be6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003913825s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-772045 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gjjsq" [28335407-2ffb-4d0b-8c1a-9953310831c1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003947004s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-730557 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-772045 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/no-preload/serial/Pause (2.39s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-772045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772045 -n no-preload-772045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772045 -n no-preload-772045: exit status 2 (281.852079ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772045 -n no-preload-772045
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772045 -n no-preload-772045: exit status 2 (302.311459ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-772045 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-772045 -n no-preload-772045
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-772045 -n no-preload-772045
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.39s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-730557 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-730557 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557: exit status 2 (278.402697ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557: exit status 2 (300.937448ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-730557 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-730557 -n default-k8s-diff-port-730557
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (3.3s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-659208 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-659208

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-659208

>>> host: /etc/nsswitch.conf:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/hosts:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/resolv.conf:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-659208

>>> host: crictl pods:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: crictl containers:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> k8s: describe netcat deployment:
error: context "cilium-659208" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-659208" does not exist

>>> k8s: netcat logs:
error: context "cilium-659208" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-659208" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-659208" does not exist

>>> k8s: coredns logs:
error: context "cilium-659208" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-659208" does not exist

>>> k8s: api server logs:
error: context "cilium-659208" does not exist

>>> host: /etc/cni:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: ip a s:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: ip r s:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: iptables-save:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: iptables table nat:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-659208

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-659208

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-659208" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-659208" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-659208

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-659208

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-659208" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-659208" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-659208" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-659208" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-659208" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: kubelet daemon config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> k8s: kubelet logs:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 19:39:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.103.2:8443
  name: NoKubernetes-469671
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19530-418716/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 19:39:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-566314
contexts:
- context:
    cluster: NoKubernetes-469671
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 19:39:06 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: NoKubernetes-469671
  name: NoKubernetes-469671
- context:
    cluster: cert-expiration-566314
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 19:39:01 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: cert-expiration-566314
  name: cert-expiration-566314
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-469671
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/NoKubernetes-469671/client.crt
    client-key: /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/NoKubernetes-469671/client.key
- name: cert-expiration-566314
  user:
    client-certificate: /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/cert-expiration-566314/client.crt
    client-key: /home/jenkins/minikube-integration/19530-418716/.minikube/profiles/cert-expiration-566314/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-659208

>>> host: docker daemon status:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: docker daemon config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: docker system info:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: cri-docker daemon status:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: cri-docker daemon config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: cri-dockerd version:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: containerd daemon status:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: containerd daemon config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: containerd config dump:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: crio daemon status:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: crio daemon config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: /etc/crio:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

>>> host: crio config:
* Profile "cilium-659208" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-659208"

----------------------- debugLogs end: cilium-659208 [took: 3.150434277s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-659208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-659208
--- SKIP: TestNetworkPlugins/group/cilium (3.30s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-495319" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-495319
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)