Test Report: Docker_Linux 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Test failures (1/343)

| Order | Failed Test                  | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.45s   |
TestAddons/parallel/Registry (72.45s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.86822ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-zc54m" [a3caeea3-7234-42cc-b0fb-1182264d0d96] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004231674s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vvcrx" [b379348e-09dc-44aa-8751-a98fd763a638] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002841869s
addons_test.go:342: (dbg) Run:  kubectl --context addons-653578 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-653578 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-653578 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.071377474s)

-- stdout --
	pod "registry-test" deleted

                                                
                                                
-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-653578 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 ip
2024/08/29 18:19:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable registry --alsologtostderr -v=1
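To triage locally, the probe that failed above can be re-run by hand. A minimal sketch, assuming the addons-653578 profile from this run is still up and kubectl can reach it (the command is copied from addons_test.go:347 above):

# Re-run the in-cluster registry reachability probe (sketch; assumes the
# addons-653578 cluster is still running).
kubectl --context addons-653578 run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- \
  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
# A healthy registry answers "HTTP/1.1 200"; in this run the probe instead
# timed out after 1m0s with "error: timed out waiting for the condition".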
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-653578
helpers_test.go:235: (dbg) docker inspect addons-653578:

-- stdout --
	[
	    {
	        "Id": "9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9",
	        "Created": "2024-08-29T18:06:35.86348459Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 21899,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-29T18:06:35.989640144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:33319d96a2f78fe466b6d8cbd88671515fca2b1eded3ce0b5f6d545b670a78ac",
	        "ResolvConfPath": "/var/lib/docker/containers/9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9/hosts",
	        "LogPath": "/var/lib/docker/containers/9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9/9b5d25a5f2b2ff1c20fbb005510e5f97ffb10bddf7b735b6d9e9d70b74d082e9-json.log",
	        "Name": "/addons-653578",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-653578:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-653578",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e48a16b2d3a064a108428ba8d1cec924e3df37049744c3fc91a2cc826bcb8ea9-init/diff:/var/lib/docker/overlay2/b0cb58c7da4ca56493a2d513748b8b5f30c3c01c477868a0629adf5750a8f1ca/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e48a16b2d3a064a108428ba8d1cec924e3df37049744c3fc91a2cc826bcb8ea9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e48a16b2d3a064a108428ba8d1cec924e3df37049744c3fc91a2cc826bcb8ea9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e48a16b2d3a064a108428ba8d1cec924e3df37049744c3fc91a2cc826bcb8ea9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-653578",
	                "Source": "/var/lib/docker/volumes/addons-653578/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-653578",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-653578",
	                "name.minikube.sigs.k8s.io": "addons-653578",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "362f6da8a28ebf88e65238bf9d3663e858beb7ab90c8720f25fa7fb36335d672",
	            "SandboxKey": "/var/run/docker/netns/362f6da8a28e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-653578": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9c6706fdf5ee0e0039c5165f74993e527bd1194bfdee992154512b94fe7fc97b",
	                    "EndpointID": "de4f7f6264f11640902c8a3b50a692b3202ea9bf9a2aaf1d516d97594855aee1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-653578",
	                        "9b5d25a5f2b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
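Rather than reading the full dump above, a single field can be pulled out with a Go template; the post-mortem helper below uses the same template form for 22/tcp. A sketch for the registry port, grounded in the Ports map shown above:

# Print only the host port published for the container's 5000/tcp (sketch).
docker container inspect addons-653578 \
  -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'
# For this run the dump above shows 5000/tcp bound to 127.0.0.1:32770.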
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-653578 -n addons-653578
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-docker-168864                                                                   | download-docker-168864 | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-314184   | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | binary-mirror-314184                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39569                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-314184                                                                     | binary-mirror-314184   | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:06 UTC |
	| addons  | enable dashboard -p                                                                         | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-653578                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC |                     |
	|         | addons-653578                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-653578 --wait=true                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:06 UTC | 29 Aug 24 18:09 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:10 UTC | 29 Aug 24 18:10 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | addons-653578                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-653578 ssh cat                                                                       | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | /opt/local-path-provisioner/pvc-39f4fb16-19c9-473f-b1e5-a21f836e005c_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:19 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-653578 addons                                                                        | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:18 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:18 UTC | 29 Aug 24 18:19 UTC |
	|         | addons-653578                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-653578 ssh curl -s                                                                   | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-653578 ip                                                                            | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-653578 addons                                                                        | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-653578 addons                                                                        | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | -p addons-653578                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | -p addons-653578                                                                            |                        |         |         |                     |                     |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC |                     |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-653578 ip                                                                            | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	| addons  | addons-653578 addons disable                                                                | addons-653578          | jenkins | v1.33.1 | 29 Aug 24 18:19 UTC | 29 Aug 24 18:19 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
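For reference, the cluster start invocation recorded in the audit table above, reassembled onto one line (flags copied verbatim from the table; only the wrapping differs):

out/minikube-linux-amd64 start -p addons-653578 --wait=true --memory=4000 \
  --alsologtostderr --addons=registry --addons=metrics-server \
  --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
  --addons=cloud-spanner --addons=inspektor-gadget \
  --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
  --addons=yakd --addons=volcano --driver=docker --container-runtime=docker \
  --addons=ingress --addons=ingress-dns --addons=helm-tiller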
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:06:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:06:14.541206   21143 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:06:14.541310   21143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:14.541320   21143 out.go:358] Setting ErrFile to fd 2...
	I0829 18:06:14.541326   21143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:06:14.541521   21143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:06:14.542143   21143 out.go:352] Setting JSON to false
	I0829 18:06:14.542973   21143 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2921,"bootTime":1724951854,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:06:14.543028   21143 start.go:139] virtualization: kvm guest
	I0829 18:06:14.545359   21143 out.go:177] * [addons-653578] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:06:14.546971   21143 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:06:14.547033   21143 notify.go:220] Checking for updates...
	I0829 18:06:14.549601   21143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:06:14.550968   21143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:06:14.552624   21143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	I0829 18:06:14.553964   21143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:06:14.555175   21143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:06:14.556515   21143 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:06:14.578111   21143 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:06:14.578268   21143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:06:14.622866   21143 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:06:14.614455252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:06:14.622964   21143 docker.go:307] overlay module found
	I0829 18:06:14.624914   21143 out.go:177] * Using the docker driver based on user configuration
	I0829 18:06:14.626319   21143 start.go:297] selected driver: docker
	I0829 18:06:14.626332   21143 start.go:901] validating driver "docker" against <nil>
	I0829 18:06:14.626346   21143 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:06:14.627061   21143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:06:14.672110   21143 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:06:14.663526691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:06:14.672260   21143 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:14.672453   21143 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:06:14.674215   21143 out.go:177] * Using Docker driver with root privileges
	I0829 18:06:14.675742   21143 cni.go:84] Creating CNI manager for ""
	I0829 18:06:14.675761   21143 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:14.675777   21143 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:14.675832   21143 start.go:340] cluster config:
	{Name:addons-653578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-653578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:14.677048   21143 out.go:177] * Starting "addons-653578" primary control-plane node in "addons-653578" cluster
	I0829 18:06:14.678237   21143 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:06:14.679791   21143 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:06:14.681197   21143 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:14.681228   21143 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:06:14.681233   21143 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:06:14.681259   21143 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:14.681345   21143 preload.go:172] Found /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0829 18:06:14.681358   21143 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 18:06:14.681636   21143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/config.json ...
	I0829 18:06:14.681656   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/config.json: {Name:mkc9bc1d1ab914bc4ce127ac4daa1e89414b2b75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:14.696655   21143 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:06:14.696764   21143 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:06:14.696803   21143 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:06:14.696811   21143 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:06:14.696817   21143 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:06:14.696822   21143 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0829 18:06:26.622202   21143 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0829 18:06:26.622244   21143 cache.go:194] Successfully downloaded all kic artifacts
	I0829 18:06:26.622279   21143 start.go:360] acquireMachinesLock for addons-653578: {Name:mkde356e0ad4af86e0ed736075961983d5119708 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 18:06:26.622369   21143 start.go:364] duration metric: took 71.082µs to acquireMachinesLock for "addons-653578"
	I0829 18:06:26.622390   21143 start.go:93] Provisioning new machine with config: &{Name:addons-653578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-653578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:26.622461   21143 start.go:125] createHost starting for "" (driver="docker")
	I0829 18:06:26.624288   21143 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0829 18:06:26.624487   21143 start.go:159] libmachine.API.Create for "addons-653578" (driver="docker")
	I0829 18:06:26.624518   21143 client.go:168] LocalClient.Create starting
	I0829 18:06:26.624605   21143 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem
	I0829 18:06:26.772678   21143 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/cert.pem
	I0829 18:06:26.963341   21143 cli_runner.go:164] Run: docker network inspect addons-653578 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0829 18:06:26.978779   21143 cli_runner.go:211] docker network inspect addons-653578 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0829 18:06:26.978851   21143 network_create.go:284] running [docker network inspect addons-653578] to gather additional debugging logs...
	I0829 18:06:26.978872   21143 cli_runner.go:164] Run: docker network inspect addons-653578
	W0829 18:06:26.993354   21143 cli_runner.go:211] docker network inspect addons-653578 returned with exit code 1
	I0829 18:06:26.993396   21143 network_create.go:287] error running [docker network inspect addons-653578]: docker network inspect addons-653578: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-653578 not found
	I0829 18:06:26.993408   21143 network_create.go:289] output of [docker network inspect addons-653578]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-653578 not found
	
	** /stderr **
	I0829 18:06:26.993540   21143 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:06:27.008915   21143 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a907a0}
	I0829 18:06:27.008957   21143 network_create.go:124] attempt to create docker network addons-653578 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0829 18:06:27.008998   21143 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-653578 addons-653578
	I0829 18:06:27.066544   21143 network_create.go:108] docker network addons-653578 192.168.49.0/24 created
	I0829 18:06:27.066571   21143 kic.go:121] calculated static IP "192.168.49.2" for the "addons-653578" container
	I0829 18:06:27.066627   21143 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0829 18:06:27.081026   21143 cli_runner.go:164] Run: docker volume create addons-653578 --label name.minikube.sigs.k8s.io=addons-653578 --label created_by.minikube.sigs.k8s.io=true
	I0829 18:06:27.097145   21143 oci.go:103] Successfully created a docker volume addons-653578
	I0829 18:06:27.097245   21143 cli_runner.go:164] Run: docker run --rm --name addons-653578-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-653578 --entrypoint /usr/bin/test -v addons-653578:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0829 18:06:31.883940   21143 cli_runner.go:217] Completed: docker run --rm --name addons-653578-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-653578 --entrypoint /usr/bin/test -v addons-653578:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (4.786657512s)
	I0829 18:06:31.883970   21143 oci.go:107] Successfully prepared a docker volume addons-653578
	I0829 18:06:31.883996   21143 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:31.884015   21143 kic.go:194] Starting extracting preloaded images to volume ...
	I0829 18:06:31.884063   21143 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-653578:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0829 18:06:35.799765   21143 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-653578:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (3.915670631s)
	I0829 18:06:35.799795   21143 kic.go:203] duration metric: took 3.915776722s to extract preloaded images to volume ...
	W0829 18:06:35.799925   21143 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0829 18:06:35.800033   21143 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0829 18:06:35.848076   21143 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-653578 --name addons-653578 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-653578 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-653578 --network addons-653578 --ip 192.168.49.2 --volume addons-653578:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0829 18:06:36.136039   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Running}}
	I0829 18:06:36.155576   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:36.174738   21143 cli_runner.go:164] Run: docker exec addons-653578 stat /var/lib/dpkg/alternatives/iptables
	I0829 18:06:36.215246   21143 oci.go:144] the created container "addons-653578" has a running status.
	I0829 18:06:36.215275   21143 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa...
	I0829 18:06:36.268331   21143 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0829 18:06:36.287366   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:36.303599   21143 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0829 18:06:36.303624   21143 kic_runner.go:114] Args: [docker exec --privileged addons-653578 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0829 18:06:36.346377   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:36.368467   21143 machine.go:93] provisionDockerMachine start ...
	I0829 18:06:36.368571   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:36.384266   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:36.384533   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:36.384551   21143 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 18:06:36.385143   21143 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56286->127.0.0.1:32768: read: connection reset by peer
	I0829 18:06:39.512862   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-653578
	
	I0829 18:06:39.512886   21143 ubuntu.go:169] provisioning hostname "addons-653578"
	I0829 18:06:39.512934   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:39.529115   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:39.529273   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:39.529286   21143 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-653578 && echo "addons-653578" | sudo tee /etc/hostname
	I0829 18:06:39.663616   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-653578
	
	I0829 18:06:39.663695   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:39.679259   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:39.679432   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:39.679449   21143 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-653578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-653578/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-653578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 18:06:39.805317   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: 
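	Note: the hostname script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1 rather than 127.0.0.1. A minimal sketch of the resulting /etc/hosts entries (illustrative contents; only the addons-653578 mapping is from this run):
	
	  127.0.0.1	localhost
	  127.0.1.1	addons-653578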
	I0829 18:06:39.805343   21143 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19531-12929/.minikube CaCertPath:/home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19531-12929/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19531-12929/.minikube}
	I0829 18:06:39.805377   21143 ubuntu.go:177] setting up certificates
	I0829 18:06:39.805393   21143 provision.go:84] configureAuth start
	I0829 18:06:39.805447   21143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-653578
	I0829 18:06:39.822187   21143 provision.go:143] copyHostCerts
	I0829 18:06:39.822266   21143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19531-12929/.minikube/ca.pem (1078 bytes)
	I0829 18:06:39.822380   21143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19531-12929/.minikube/cert.pem (1123 bytes)
	I0829 18:06:39.822479   21143 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19531-12929/.minikube/key.pem (1675 bytes)
	I0829 18:06:39.822529   21143 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19531-12929/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca-key.pem org=jenkins.addons-653578 san=[127.0.0.1 192.168.49.2 addons-653578 localhost minikube]
	I0829 18:06:39.897152   21143 provision.go:177] copyRemoteCerts
	I0829 18:06:39.897206   21143 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 18:06:39.897236   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:39.913841   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:40.006348   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0829 18:06:40.026701   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 18:06:40.046723   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 18:06:40.066833   21143 provision.go:87] duration metric: took 261.425383ms to configureAuth
	I0829 18:06:40.066862   21143 ubuntu.go:193] setting minikube options for container-runtime
	I0829 18:06:40.067021   21143 config.go:182] Loaded profile config "addons-653578": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:40.067067   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:40.083631   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:40.083850   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:40.083866   21143 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 18:06:40.205898   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0829 18:06:40.205922   21143 ubuntu.go:71] root file system type: overlay
	I0829 18:06:40.206032   21143 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 18:06:40.206093   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:40.222592   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:40.222766   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:40.222827   21143 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 18:06:40.355868   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
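	Note: the unit written above relies on systemd's ExecStart reset semantics, which its embedded comments describe: for a non-oneshot service, an empty ExecStart= first clears any inherited command so that the ExecStart= that follows becomes the only one. The same pattern as a minimal drop-in override (hypothetical unit and path, not part of this run):
	
	  # /etc/systemd/system/docker.service.d/override.conf (example)
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock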
	I0829 18:06:40.355948   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:40.372899   21143 main.go:141] libmachine: Using SSH client type: native
	I0829 18:06:40.373101   21143 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0829 18:06:40.373127   21143 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 18:06:41.030378   21143 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-12 11:48:57.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-29 18:06:40.350494093 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
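	Note: the diff-or-install command above only replaces the unit and restarts Docker when the rendered file differs from what is on disk; the diff output is that check firing. The same idempotent pattern, unrolled as a sketch (same paths as the log; error handling omitted):
	
	  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  fi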
	I0829 18:06:41.030411   21143 machine.go:96] duration metric: took 4.661912301s to provisionDockerMachine
	I0829 18:06:41.030422   21143 client.go:171] duration metric: took 14.405896786s to LocalClient.Create
	I0829 18:06:41.030447   21143 start.go:167] duration metric: took 14.40595299s to libmachine.API.Create "addons-653578"
	I0829 18:06:41.030458   21143 start.go:293] postStartSetup for "addons-653578" (driver="docker")
	I0829 18:06:41.030469   21143 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 18:06:41.030516   21143 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 18:06:41.030547   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:41.047020   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:41.138296   21143 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 18:06:41.141042   21143 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0829 18:06:41.141073   21143 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0829 18:06:41.141082   21143 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0829 18:06:41.141090   21143 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0829 18:06:41.141119   21143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-12929/.minikube/addons for local assets ...
	I0829 18:06:41.141182   21143 filesync.go:126] Scanning /home/jenkins/minikube-integration/19531-12929/.minikube/files for local assets ...
	I0829 18:06:41.141216   21143 start.go:296] duration metric: took 110.7494ms for postStartSetup
	I0829 18:06:41.141514   21143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-653578
	I0829 18:06:41.157783   21143 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/config.json ...
	I0829 18:06:41.158134   21143 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:06:41.158188   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:41.174073   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:41.262048   21143 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0829 18:06:41.265795   21143 start.go:128] duration metric: took 14.643322851s to createHost
	I0829 18:06:41.265814   21143 start.go:83] releasing machines lock for "addons-653578", held for 14.643434166s
	I0829 18:06:41.265876   21143 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-653578
	I0829 18:06:41.281621   21143 ssh_runner.go:195] Run: cat /version.json
	I0829 18:06:41.281689   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:41.281690   21143 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 18:06:41.281883   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:41.297879   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:41.298901   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:41.385002   21143 ssh_runner.go:195] Run: systemctl --version
	I0829 18:06:41.389051   21143 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0829 18:06:41.455576   21143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0829 18:06:41.476669   21143 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
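	Note: the find/sed pass above normalizes the loopback CNI config, injecting a "name" field when one is missing and pinning cniVersion to 1.0.0. An illustrative post-patch file (hypothetical contents and filename, not captured from this run):
	
	  {
	      "cniVersion": "1.0.0",
	      "name": "loopback",
	      "type": "loopback"
	  }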
	I0829 18:06:41.476731   21143 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 18:06:41.499740   21143 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0829 18:06:41.499764   21143 start.go:495] detecting cgroup driver to use...
	I0829 18:06:41.499794   21143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:41.499891   21143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:41.513358   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 18:06:41.522345   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 18:06:41.531054   21143 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 18:06:41.531112   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 18:06:41.539574   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:41.547768   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 18:06:41.556031   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 18:06:41.564356   21143 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 18:06:41.572105   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 18:06:41.580258   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 18:06:41.588359   21143 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 18:06:41.596476   21143 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 18:06:41.603434   21143 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 18:06:41.610304   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:41.678939   21143 ssh_runner.go:195] Run: sudo systemctl restart containerd
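	Note: the sed series above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver and the runc v2 runtime before the restart. An illustrative fragment of the keys being targeted (hypothetical file contents, not captured from this run):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    enable_unprivileged_ports = true
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	      runtime_type = "io.containerd.runc.v2"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false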
	I0829 18:06:41.765564   21143 start.go:495] detecting cgroup driver to use...
	I0829 18:06:41.765609   21143 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0829 18:06:41.765657   21143 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 18:06:41.776242   21143 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0829 18:06:41.776304   21143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 18:06:41.786128   21143 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 18:06:41.800820   21143 ssh_runner.go:195] Run: which cri-dockerd
	I0829 18:06:41.803815   21143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 18:06:41.811670   21143 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 18:06:41.828393   21143 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 18:06:41.926610   21143 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 18:06:42.000621   21143 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 18:06:42.000773   21143 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 18:06:42.022756   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:42.103233   21143 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 18:06:42.344577   21143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 18:06:42.354973   21143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:42.364955   21143 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 18:06:42.439712   21143 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 18:06:42.515805   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:42.595721   21143 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 18:06:42.607505   21143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 18:06:42.617102   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:42.688840   21143 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 18:06:42.746660   21143 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 18:06:42.746741   21143 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 18:06:42.750758   21143 start.go:563] Will wait 60s for crictl version
	I0829 18:06:42.750804   21143 ssh_runner.go:195] Run: which crictl
	I0829 18:06:42.753757   21143 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 18:06:42.783784   21143 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0829 18:06:42.783835   21143 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:42.805888   21143 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 18:06:42.829594   21143 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0829 18:06:42.829664   21143 cli_runner.go:164] Run: docker network inspect addons-653578 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0829 18:06:42.845597   21143 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0829 18:06:42.848852   21143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
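	Note: the host.minikube.internal entry is refreshed with a strip-then-append pattern: drop any existing line for that name, append the current mapping, then copy the file back. The same pattern, unrolled (host and IP from this run; the temp path is an example):
	
	  {
	    grep -v $'\thost.minikube.internal$' /etc/hosts
	    printf '192.168.49.1\thost.minikube.internal\n'
	  } > /tmp/hosts.new
	  sudo cp /tmp/hosts.new /etc/hosts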
	I0829 18:06:42.858290   21143 kubeadm.go:883] updating cluster {Name:addons-653578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-653578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 18:06:42.858394   21143 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:42.858458   21143 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:42.875415   21143 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:06:42.875445   21143 docker.go:615] Images already preloaded, skipping extraction
	I0829 18:06:42.875507   21143 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 18:06:42.892756   21143 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 18:06:42.892776   21143 cache_images.go:84] Images are preloaded, skipping loading
	I0829 18:06:42.892793   21143 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0829 18:06:42.892885   21143 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-653578 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-653578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 18:06:42.892931   21143 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 18:06:42.935186   21143 cni.go:84] Creating CNI manager for ""
	I0829 18:06:42.935210   21143 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:42.935229   21143 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 18:06:42.935250   21143 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-653578 NodeName:addons-653578 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 18:06:42.935366   21143 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-653578"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
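	Note: the manifest above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file, which is what the later kubeadm init --config invocation consumes. One way to sanity-check a config like this without touching the node (a sketch; the path is where this run stages the file):
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run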
	I0829 18:06:42.935413   21143 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 18:06:42.943215   21143 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 18:06:42.943263   21143 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 18:06:42.950830   21143 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0829 18:06:42.966180   21143 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 18:06:42.981277   21143 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0829 18:06:42.996424   21143 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0829 18:06:42.999400   21143 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 18:06:43.008875   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:43.080374   21143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:43.092346   21143 certs.go:68] Setting up /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578 for IP: 192.168.49.2
	I0829 18:06:43.092366   21143 certs.go:194] generating shared ca certs ...
	I0829 18:06:43.092380   21143 certs.go:226] acquiring lock for ca certs: {Name:mk3a8b5f8fc59dccfdb89bf93e783fa1d162205f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.092483   21143 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19531-12929/.minikube/ca.key
	I0829 18:06:43.605427   21143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-12929/.minikube/ca.crt ...
	I0829 18:06:43.605462   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/ca.crt: {Name:mk8287478ce66aa31380c1c20163afdae4976f4c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.605625   21143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-12929/.minikube/ca.key ...
	I0829 18:06:43.605635   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/ca.key: {Name:mk920c01c799b09744d0c246aaa262d70ae0b499 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.605720   21143 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.key
	I0829 18:06:43.819164   21143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.crt ...
	I0829 18:06:43.819192   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.crt: {Name:mkf9ee6d7ba722f0eb4b4dc575c1f10d03b13469 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.819345   21143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.key ...
	I0829 18:06:43.819355   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.key: {Name:mkb78102f658a3ef377795833225e7c80244f3d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.819417   21143 certs.go:256] generating profile certs ...
	I0829 18:06:43.819474   21143 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.key
	I0829 18:06:43.819488   21143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt with IP's: []
	I0829 18:06:43.936538   21143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt ...
	I0829 18:06:43.936568   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: {Name:mk20820ace9b6b579490f45ef77079e683065654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.936724   21143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.key ...
	I0829 18:06:43.936732   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.key: {Name:mka5db5a26fa4841fa112e7084a08b5de886d343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:43.936798   21143 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key.646cf1dc
	I0829 18:06:43.936815   21143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt.646cf1dc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0829 18:06:44.236395   21143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt.646cf1dc ...
	I0829 18:06:44.236428   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt.646cf1dc: {Name:mkd818ca95ef1394082926fc67a403e3f15ed629 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:44.236591   21143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key.646cf1dc ...
	I0829 18:06:44.236604   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key.646cf1dc: {Name:mk0865ebbc0763782afd26444c860d5f9034281e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:44.236673   21143 certs.go:381] copying /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt.646cf1dc -> /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt
	I0829 18:06:44.236757   21143 certs.go:385] copying /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key.646cf1dc -> /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key
	I0829 18:06:44.236808   21143 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.key
	I0829 18:06:44.236826   21143 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.crt with IP's: []
	I0829 18:06:44.361196   21143 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.crt ...
	I0829 18:06:44.361224   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.crt: {Name:mka4b40745ad561c9ee04e09a4a4fde1ef727f0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:44.361372   21143 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.key ...
	I0829 18:06:44.361383   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.key: {Name:mkbb640ca47dcbd172d12d27f3ea5e8b143d0cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:44.361547   21143 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 18:06:44.361578   21143 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/ca.pem (1078 bytes)
	I0829 18:06:44.361604   21143 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/cert.pem (1123 bytes)
	I0829 18:06:44.361625   21143 certs.go:484] found cert: /home/jenkins/minikube-integration/19531-12929/.minikube/certs/key.pem (1675 bytes)
	I0829 18:06:44.362211   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 18:06:44.383583   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 18:06:44.403561   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 18:06:44.424057   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0829 18:06:44.443543   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 18:06:44.462593   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 18:06:44.482047   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 18:06:44.501588   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 18:06:44.520705   21143 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19531-12929/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 18:06:44.540149   21143 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 18:06:44.555040   21143 ssh_runner.go:195] Run: openssl version
	I0829 18:06:44.559834   21143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 18:06:44.567737   21143 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:44.570621   21143 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:06 /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:44.570661   21143 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 18:06:44.576396   21143 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
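	Note: the b5213941.0 symlink follows OpenSSL's hashed-directory convention: CAs under /etc/ssl/certs are located by subject-name hash plus a numeric suffix, and that hash is what the openssl x509 -hash call above computes (the value b5213941 is inferred here from the link name, not printed in this log):
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, e.g. b5213941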
	I0829 18:06:44.584313   21143 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 18:06:44.587140   21143 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 18:06:44.587188   21143 kubeadm.go:392] StartCluster: {Name:addons-653578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-653578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:44.587296   21143 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 18:06:44.602703   21143 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 18:06:44.610166   21143 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 18:06:44.617300   21143 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0829 18:06:44.617335   21143 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 18:06:44.624327   21143 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 18:06:44.624344   21143 kubeadm.go:157] found existing configuration files:
	
	I0829 18:06:44.624381   21143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 18:06:44.631630   21143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 18:06:44.631677   21143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 18:06:44.638750   21143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 18:06:44.645608   21143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 18:06:44.645656   21143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 18:06:44.652299   21143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 18:06:44.659253   21143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 18:06:44.659285   21143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 18:06:44.665956   21143 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 18:06:44.672719   21143 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 18:06:44.672752   21143 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
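	Note: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is deleted so kubeadm can regenerate it. The four check-and-remove steps above, condensed into one sketch:
	
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	  done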
	I0829 18:06:44.679363   21143 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0829 18:06:44.712144   21143 kubeadm.go:310] W0829 18:06:44.711554    1924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:44.712683   21143 kubeadm.go:310] W0829 18:06:44.712203    1924 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 18:06:44.734520   21143 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-gcp\n", err: exit status 1
	I0829 18:06:44.785186   21143 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0829 18:06:54.034649   21143 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 18:06:54.034719   21143 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 18:06:54.034823   21143 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0829 18:06:54.034935   21143 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-gcp
	I0829 18:06:54.035002   21143 kubeadm.go:310] OS: Linux
	I0829 18:06:54.035068   21143 kubeadm.go:310] CGROUPS_CPU: enabled
	I0829 18:06:54.035142   21143 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0829 18:06:54.035188   21143 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0829 18:06:54.035238   21143 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0829 18:06:54.035310   21143 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0829 18:06:54.035390   21143 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0829 18:06:54.035433   21143 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0829 18:06:54.035481   21143 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0829 18:06:54.035520   21143 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0829 18:06:54.035583   21143 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 18:06:54.035664   21143 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 18:06:54.035738   21143 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 18:06:54.035792   21143 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 18:06:54.037775   21143 out.go:235]   - Generating certificates and keys ...
	I0829 18:06:54.037871   21143 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 18:06:54.037963   21143 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 18:06:54.038062   21143 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 18:06:54.038147   21143 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 18:06:54.038238   21143 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 18:06:54.038309   21143 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 18:06:54.038379   21143 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 18:06:54.038537   21143 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-653578 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:06:54.038614   21143 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 18:06:54.038761   21143 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-653578 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0829 18:06:54.038859   21143 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 18:06:54.038961   21143 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 18:06:54.039029   21143 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 18:06:54.039143   21143 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 18:06:54.039217   21143 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 18:06:54.039317   21143 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 18:06:54.039402   21143 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 18:06:54.039460   21143 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 18:06:54.039541   21143 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 18:06:54.039644   21143 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 18:06:54.039703   21143 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 18:06:54.041306   21143 out.go:235]   - Booting up control plane ...
	I0829 18:06:54.041392   21143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 18:06:54.041510   21143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 18:06:54.041594   21143 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 18:06:54.041763   21143 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 18:06:54.041886   21143 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 18:06:54.041951   21143 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 18:06:54.042062   21143 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 18:06:54.042149   21143 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 18:06:54.042196   21143 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.533252ms
	I0829 18:06:54.042265   21143 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 18:06:54.042334   21143 kubeadm.go:310] [api-check] The API server is healthy after 4.501116431s
	I0829 18:06:54.042453   21143 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 18:06:54.042561   21143 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 18:06:54.042610   21143 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 18:06:54.042757   21143 kubeadm.go:310] [mark-control-plane] Marking the node addons-653578 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 18:06:54.042807   21143 kubeadm.go:310] [bootstrap-token] Using token: qm44pf.7w57d12tfpnnlv0n
	I0829 18:06:54.044371   21143 out.go:235]   - Configuring RBAC rules ...
	I0829 18:06:54.044475   21143 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 18:06:54.044547   21143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 18:06:54.044702   21143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 18:06:54.044886   21143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 18:06:54.045051   21143 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 18:06:54.045150   21143 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 18:06:54.045295   21143 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 18:06:54.045345   21143 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 18:06:54.045391   21143 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 18:06:54.045402   21143 kubeadm.go:310] 
	I0829 18:06:54.045492   21143 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 18:06:54.045509   21143 kubeadm.go:310] 
	I0829 18:06:54.045597   21143 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 18:06:54.045612   21143 kubeadm.go:310] 
	I0829 18:06:54.045649   21143 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 18:06:54.045749   21143 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 18:06:54.045808   21143 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 18:06:54.045815   21143 kubeadm.go:310] 
	I0829 18:06:54.045881   21143 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 18:06:54.045890   21143 kubeadm.go:310] 
	I0829 18:06:54.045951   21143 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 18:06:54.045959   21143 kubeadm.go:310] 
	I0829 18:06:54.046032   21143 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 18:06:54.046146   21143 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 18:06:54.046247   21143 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 18:06:54.046260   21143 kubeadm.go:310] 
	I0829 18:06:54.046385   21143 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 18:06:54.046501   21143 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 18:06:54.046517   21143 kubeadm.go:310] 
	I0829 18:06:54.046600   21143 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token qm44pf.7w57d12tfpnnlv0n \
	I0829 18:06:54.046736   21143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:219d1827506d3dc0c0e2b4d0df8adb2f75d9334fe1e37c771e23c54a1299cdc1 \
	I0829 18:06:54.046771   21143 kubeadm.go:310] 	--control-plane 
	I0829 18:06:54.046784   21143 kubeadm.go:310] 
	I0829 18:06:54.046868   21143 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 18:06:54.046878   21143 kubeadm.go:310] 
	I0829 18:06:54.046941   21143 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token qm44pf.7w57d12tfpnnlv0n \
	I0829 18:06:54.047065   21143 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:219d1827506d3dc0c0e2b4d0df8adb2f75d9334fe1e37c771e23c54a1299cdc1 
	I0829 18:06:54.047078   21143 cni.go:84] Creating CNI manager for ""
	I0829 18:06:54.047097   21143 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:54.048542   21143 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 18:06:54.049627   21143 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 18:06:54.057657   21143 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 18:06:54.075099   21143 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 18:06:54.075176   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.075199   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-653578 minikube.k8s.io/updated_at=2024_08_29T18_06_54_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-653578 minikube.k8s.io/primary=true
	I0829 18:06:54.081603   21143 ops.go:34] apiserver oom_adj: -16
	I0829 18:06:54.137459   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:54.638395   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.138045   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:55.637532   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.138097   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:56.637832   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:57.138242   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:57.638030   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:58.137999   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:58.638305   21143 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 18:06:58.703528   21143 kubeadm.go:1113] duration metric: took 4.628409817s to wait for elevateKubeSystemPrivileges
	I0829 18:06:58.703563   21143 kubeadm.go:394] duration metric: took 14.116379095s to StartCluster
	I0829 18:06:58.703592   21143 settings.go:142] acquiring lock: {Name:mk446578310f663dfd6dffb428d0b3c44e8559e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:58.703729   21143 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:06:58.704194   21143 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/kubeconfig: {Name:mk461465f41d3b241fd5ffaa8bbead78414bb970 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:06:58.704398   21143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 18:06:58.704421   21143 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 18:06:58.704506   21143 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 18:06:58.704600   21143 config.go:182] Loaded profile config "addons-653578": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:58.704657   21143 addons.go:69] Setting metrics-server=true in profile "addons-653578"
	I0829 18:06:58.704652   21143 addons.go:69] Setting cloud-spanner=true in profile "addons-653578"
	I0829 18:06:58.704642   21143 addons.go:69] Setting default-storageclass=true in profile "addons-653578"
	I0829 18:06:58.704665   21143 addons.go:69] Setting ingress=true in profile "addons-653578"
	I0829 18:06:58.704685   21143 addons.go:69] Setting ingress-dns=true in profile "addons-653578"
	I0829 18:06:58.704697   21143 addons.go:234] Setting addon ingress=true in "addons-653578"
	I0829 18:06:58.704696   21143 addons.go:69] Setting inspektor-gadget=true in profile "addons-653578"
	I0829 18:06:58.704709   21143 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-653578"
	I0829 18:06:58.704714   21143 addons.go:234] Setting addon inspektor-gadget=true in "addons-653578"
	I0829 18:06:58.704727   21143 addons.go:234] Setting addon ingress-dns=true in "addons-653578"
	I0829 18:06:58.704746   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.704752   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.704765   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.704889   21143 addons.go:69] Setting storage-provisioner=true in profile "addons-653578"
	I0829 18:06:58.704913   21143 addons.go:234] Setting addon storage-provisioner=true in "addons-653578"
	I0829 18:06:58.704937   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.705098   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.705153   21143 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-653578"
	I0829 18:06:58.705180   21143 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-653578"
	I0829 18:06:58.705204   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.705266   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.705271   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.705302   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.705382   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.704691   21143 addons.go:234] Setting addon cloud-spanner=true in "addons-653578"
	I0829 18:06:58.705509   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.704660   21143 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-653578"
	I0829 18:06:58.705616   21143 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-653578"
	I0829 18:06:58.705648   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.705921   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.705978   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.706083   21143 addons.go:69] Setting yakd=true in profile "addons-653578"
	I0829 18:06:58.706099   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.706137   21143 addons.go:234] Setting addon yakd=true in "addons-653578"
	I0829 18:06:58.706199   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.706684   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.707837   21143 addons.go:69] Setting registry=true in profile "addons-653578"
	I0829 18:06:58.707874   21143 addons.go:234] Setting addon registry=true in "addons-653578"
	I0829 18:06:58.707907   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.708379   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.711020   21143 out.go:177] * Verifying Kubernetes components...
	I0829 18:06:58.712572   21143 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 18:06:58.712745   21143 addons.go:69] Setting volumesnapshots=true in profile "addons-653578"
	I0829 18:06:58.712779   21143 addons.go:234] Setting addon volumesnapshots=true in "addons-653578"
	I0829 18:06:58.712811   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.704688   21143 addons.go:234] Setting addon metrics-server=true in "addons-653578"
	I0829 18:06:58.713287   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.713320   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.713714   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.706153   21143 addons.go:69] Setting gcp-auth=true in profile "addons-653578"
	I0829 18:06:58.713970   21143 mustload.go:65] Loading cluster: addons-653578
	I0829 18:06:58.713988   21143 addons.go:69] Setting volcano=true in profile "addons-653578"
	I0829 18:06:58.714058   21143 addons.go:234] Setting addon volcano=true in "addons-653578"
	I0829 18:06:58.714113   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.714204   21143 config.go:182] Loaded profile config "addons-653578": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:06:58.714489   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.714673   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.713969   21143 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-653578"
	I0829 18:06:58.715101   21143 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-653578"
	I0829 18:06:58.706162   21143 addons.go:69] Setting helm-tiller=true in profile "addons-653578"
	I0829 18:06:58.715537   21143 addons.go:234] Setting addon helm-tiller=true in "addons-653578"
	I0829 18:06:58.715601   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.716098   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.749089   21143 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 18:06:58.749790   21143 addons.go:234] Setting addon default-storageclass=true in "addons-653578"
	I0829 18:06:58.749830   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.750299   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.750321   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.751419   21143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:58.752536   21143 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 18:06:58.752576   21143 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 18:06:58.752658   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.754779   21143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:06:58.754833   21143 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 18:06:58.756281   21143 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:58.756297   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 18:06:58.756345   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.756524   21143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 18:06:58.758091   21143 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:58.758117   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 18:06:58.758172   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.762232   21143 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 18:06:58.762308   21143 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 18:06:58.762311   21143 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 18:06:58.762248   21143 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 18:06:58.763977   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 18:06:58.763995   21143 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:58.764086   21143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:58.764095   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 18:06:58.764098   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 18:06:58.764150   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.764161   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.764393   21143 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:58.764407   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 18:06:58.764448   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.764593   21143 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 18:06:58.764606   21143 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 18:06:58.764644   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.766244   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 18:06:58.766299   21143 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 18:06:58.767570   21143 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 18:06:58.767584   21143 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 18:06:58.767629   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.767802   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 18:06:58.769298   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 18:06:58.783099   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 18:06:58.785178   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 18:06:58.786768   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 18:06:58.787883   21143 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-653578"
	I0829 18:06:58.787929   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.788258   21143 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 18:06:58.788402   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:06:58.789532   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 18:06:58.792191   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 18:06:58.792225   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 18:06:58.792302   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.793571   21143 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 18:06:58.795035   21143 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 18:06:58.795052   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 18:06:58.795111   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.801721   21143 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:58.801741   21143 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 18:06:58.801803   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.813976   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.820942   21143 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 18:06:58.822452   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 18:06:58.822481   21143 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 18:06:58.822547   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.830302   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.835191   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:06:58.835400   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.837187   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.845093   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.851992   21143 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 18:06:58.852722   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.861854   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.863229   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.864382   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.864637   21143 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 18:06:58.869028   21143 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 18:06:58.872185   21143 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:58.872209   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 18:06:58.872269   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.875395   21143 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0829 18:06:58.876702   21143 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0829 18:06:58.876719   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0829 18:06:58.876778   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.878235   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.878712   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.880801   21143 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 18:06:58.881079   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.883468   21143 out.go:177]   - Using image docker.io/busybox:stable
	I0829 18:06:58.884630   21143 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:58.884642   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 18:06:58.884678   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:06:58.892424   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.896372   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:58.901947   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:06:59.245593   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 18:06:59.318269   21143 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 18:06:59.318328   21143 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 18:06:59.324529   21143 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 18:06:59.324553   21143 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 18:06:59.326826   21143 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 18:06:59.326906   21143 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 18:06:59.328185   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 18:06:59.339428   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 18:06:59.515161   21143 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0829 18:06:59.515256   21143 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0829 18:06:59.517386   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 18:06:59.518942   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 18:06:59.524335   21143 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 18:06:59.524384   21143 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 18:06:59.524931   21143 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 18:06:59.524952   21143 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 18:06:59.619523   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 18:06:59.623753   21143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 18:06:59.623779   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 18:06:59.628933   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 18:06:59.716158   21143 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 18:06:59.716204   21143 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 18:06:59.723310   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 18:06:59.818156   21143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 18:06:59.818189   21143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 18:06:59.822410   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 18:06:59.822440   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 18:06:59.825998   21143 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 18:06:59.826026   21143 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 18:06:59.916652   21143 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:06:59.916747   21143 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0829 18:06:59.933848   21143 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 18:06:59.933941   21143 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 18:07:00.131037   21143 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:07:00.131113   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 18:07:00.232715   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0829 18:07:00.319050   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 18:07:00.319134   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 18:07:00.334958   21143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 18:07:00.335041   21143 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 18:07:00.521726   21143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 18:07:00.521754   21143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 18:07:00.526536   21143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 18:07:00.526580   21143 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 18:07:00.614481   21143 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:07:00.614512   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 18:07:00.830465   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.584819966s)
	I0829 18:07:00.830525   21143 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.512167138s)
	I0829 18:07:00.831524   21143 node_ready.go:35] waiting up to 6m0s for node "addons-653578" to be "Ready" ...
	I0829 18:07:00.834780   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 18:07:00.834853   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 18:07:00.836419   21143 node_ready.go:49] node "addons-653578" has status "Ready":"True"
	I0829 18:07:00.836473   21143 node_ready.go:38] duration metric: took 4.893068ms for node "addons-653578" to be "Ready" ...
	I0829 18:07:00.836499   21143 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:00.916108   21143 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:07:00.916153   21143 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 18:07:00.929976   21143 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:01.019964   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 18:07:01.036597   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 18:07:01.220918   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 18:07:01.220954   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 18:07:01.233868   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 18:07:01.327145   21143 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 18:07:01.327241   21143 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 18:07:01.417322   21143 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 18:07:01.417410   21143 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 18:07:01.433790   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 18:07:01.433892   21143 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 18:07:01.525301   21143 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.206851818s)
	I0829 18:07:01.525410   21143 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0829 18:07:01.921506   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 18:07:01.921597   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 18:07:01.924239   21143 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 18:07:01.924314   21143 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 18:07:02.029039   21143 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-653578" context rescaled to 1 replicas
	I0829 18:07:02.031688   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.703471491s)
	I0829 18:07:02.031755   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.692258595s)
	I0829 18:07:02.031804   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.514389259s)
	I0829 18:07:02.127667   21143 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 18:07:02.127700   21143 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 18:07:02.234547   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 18:07:02.234578   21143 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 18:07:02.514740   21143 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:07:02.514776   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 18:07:02.817122   21143 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:02.817153   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 18:07:02.828041   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 18:07:02.939830   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:03.333464   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 18:07:03.333542   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 18:07:03.337221   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:03.822797   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 18:07:03.822885   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 18:07:04.516486   21143 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:04.516565   21143 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 18:07:04.934626   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 18:07:05.532754   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:05.922007   21143 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 18:07:05.922089   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:07:05.947252   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:07:07.126732   21143 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 18:07:07.528186   21143 addons.go:234] Setting addon gcp-auth=true in "addons-653578"
	I0829 18:07:07.528261   21143 host.go:66] Checking if "addons-653578" exists ...
	I0829 18:07:07.528878   21143 cli_runner.go:164] Run: docker container inspect addons-653578 --format={{.State.Status}}
	I0829 18:07:07.550231   21143 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 18:07:07.550276   21143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-653578
	I0829 18:07:07.567747   21143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/addons-653578/id_rsa Username:docker}
	I0829 18:07:08.015368   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:10.016469   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:11.127186   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.608198601s)
	I0829 18:07:11.127430   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (11.507867109s)
	I0829 18:07:11.127463   21143 addons.go:475] Verifying addon ingress=true in "addons-653578"
	I0829 18:07:11.127642   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.498680488s)
	I0829 18:07:11.127838   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.40440789s)
	I0829 18:07:11.127910   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.10791272s)
	I0829 18:07:11.127938   21143 addons.go:475] Verifying addon registry=true in "addons-653578"
	I0829 18:07:11.127850   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (10.895043348s)
	I0829 18:07:11.128101   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.091467913s)
	I0829 18:07:11.128253   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.894168072s)
	I0829 18:07:11.128266   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.300180283s)
	I0829 18:07:11.128381   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.791084011s)
	W0829 18:07:11.129023   21143 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 18:07:11.129054   21143 retry.go:31] will retry after 141.158823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
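The failure above is the usual CRD ordering race: the same kubectl apply batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them, so the client's REST mapper has not yet discovered the new snapshot.storage.k8s.io/v1 kinds, hence "no matches for kind \"VolumeSnapshotClass\"". minikube copes by retrying with a delay (retry.go:31 above) and, at 18:07:11.271310 below, re-running the apply with --force. A minimal Go sketch of that retry-with-backoff pattern (this is not minikube's actual retry.go; the attempt count, delays, and the simulated failure are made up for illustration):

	// retry runs op up to attempts times, sleeping delay between tries
	// and doubling it after each failure, capped at maxDelay.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func retry(attempts int, delay, maxDelay time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			fmt.Printf("attempt %d failed: %v; will retry after %s\n", i+1, err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
		return err
	}

	func main() {
		calls := 0
		_ = retry(5, 140*time.Millisecond, 2*time.Second, func() error {
			// Stand-in for the kubectl apply: fail twice, then succeed,
			// mimicking API discovery catching up between attempts.
			if calls++; calls < 3 {
				return errors.New(`no matches for kind "VolumeSnapshotClass"`)
			}
			return nil
		})
	}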
	I0829 18:07:11.129135   21143 addons.go:475] Verifying addon metrics-server=true in "addons-653578"
	I0829 18:07:11.129372   21143 out.go:177] * Verifying ingress addon...
	I0829 18:07:11.129400   21143 out.go:177] * Verifying registry addon...
	I0829 18:07:11.130754   21143 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-653578 service yakd-dashboard -n yakd-dashboard
	
	I0829 18:07:11.132768   21143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 18:07:11.133967   21143 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0829 18:07:11.138246   21143 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 18:07:11.138269   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.139299   21143 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 18:07:11.139315   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
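The kapi.go:96 lines that dominate the rest of this log are minikube polling each addon's pods by label selector until none remain Pending. A minimal client-go sketch of the same loop (assumes the k8s.io/client-go and k8s.io/api modules; this is illustrative, not kapi.go itself, and the kubeconfig path is the in-guest one quoted elsewhere in this log, used here only as a placeholder):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "kubernetes.io/minikube-addons=registry"
		for {
			// List the pods matching the label selector, then count how
			// many are still in the Pending phase.
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				panic(err)
			}
			pending := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					pending++
				}
			}
			if len(pods.Items) > 0 && pending == 0 {
				fmt.Println("all pods for", selector, "have left Pending")
				return
			}
			fmt.Printf("waiting for pod %q, %d still Pending\n", selector, pending)
			time.Sleep(500 * time.Millisecond)
		}
	}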
	I0829 18:07:11.271310   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 18:07:11.716804   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:11.717928   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.140562   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.142147   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.440427   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:12.536845   21143 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.986579808s)
	I0829 18:07:12.537042   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.602372714s)
	I0829 18:07:12.537097   21143 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-653578"
	I0829 18:07:12.538561   21143 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 18:07:12.538572   21143 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 18:07:12.540520   21143 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 18:07:12.541423   21143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 18:07:12.542340   21143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 18:07:12.542360   21143 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 18:07:12.618047   21143 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 18:07:12.618078   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:12.631303   21143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 18:07:12.631329   21143 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 18:07:12.718292   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:12.718875   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:12.726039   21143 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:12.726060   21143 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 18:07:12.825251   21143 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 18:07:13.119074   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.219929   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.220212   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:13.618085   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:13.717608   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:13.718765   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.035102   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.763729818s)
	I0829 18:07:14.046624   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.136572   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.237069   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.247472   21143 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.422178516s)
	I0829 18:07:14.249502   21143 addons.go:475] Verifying addon gcp-auth=true in "addons-653578"
	I0829 18:07:14.251235   21143 out.go:177] * Verifying gcp-auth addon...
	I0829 18:07:14.253777   21143 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 18:07:14.336657   21143 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:07:14.545822   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:14.636968   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:14.637729   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:14.935898   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:15.046027   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.137232   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.137685   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:15.546284   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:15.636418   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:15.637252   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.047030   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.137157   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.137588   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.546923   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:16.646849   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:16.647523   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:16.936397   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:17.046460   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.136428   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.137638   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:17.547000   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:17.636199   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:17.639076   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.045437   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.136780   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.137981   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:18.546218   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:18.636163   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:18.637240   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.045512   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.136805   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.137115   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:19.435084   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:19.545283   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:19.636522   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:19.637409   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.044771   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.135850   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.137010   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:20.547086   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:20.637013   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:20.637423   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.045737   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.136067   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.138274   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:21.435452   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:21.546979   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:21.636943   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:21.637202   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.045858   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.136969   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.137924   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:22.545796   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:22.636911   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:22.637924   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.045595   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.137027   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.138300   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:23.435597   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:23.546139   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:23.635645   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:23.637880   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.046607   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.147009   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:24.147103   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.546306   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:24.646566   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:24.646847   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.045927   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.136324   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.137269   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.545927   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:25.645925   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:25.646680   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:25.935419   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:26.046029   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.135679   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.137691   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:26.545572   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:26.636608   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:26.636982   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.045536   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.136938   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.137873   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:27.545142   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:27.636242   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:27.637332   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.045591   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.137685   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.215511   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:28.437761   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:28.545080   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:28.636408   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:28.637105   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.046269   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.135674   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.137162   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:29.546111   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:29.636434   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:29.637127   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.045319   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.136295   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.137311   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.545021   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:30.636227   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:30.637730   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:30.936059   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:31.046261   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.137327   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.137916   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:31.546735   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:31.636248   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:31.638133   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.065805   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.137394   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.138502   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:32.545835   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:32.636742   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:32.637076   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.045421   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.136340   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.137661   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:33.435763   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:33.546195   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:33.635720   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:33.637739   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.046196   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.136201   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.137312   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:34.545862   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:34.636424   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:34.637415   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.046127   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.136295   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.137138   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:35.435991   21143 pod_ready.go:103] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"False"
	I0829 18:07:35.546492   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:35.725135   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:35.726013   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.046070   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.136161   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.137267   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:36.437986   21143 pod_ready.go:93] pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:36.438008   21143 pod_ready.go:82] duration metric: took 35.508001419s for pod "coredns-6f6b679f8f-ggmm6" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.438017   21143 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pds5d" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.439558   21143 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-pds5d" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pds5d" not found
	I0829 18:07:36.439580   21143 pod_ready.go:82] duration metric: took 1.556257ms for pod "coredns-6f6b679f8f-pds5d" in "kube-system" namespace to be "Ready" ...
	E0829 18:07:36.439588   21143 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-pds5d" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pds5d" not found
	I0829 18:07:36.439595   21143 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.443124   21143 pod_ready.go:93] pod "etcd-addons-653578" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:36.443141   21143 pod_ready.go:82] duration metric: took 3.540281ms for pod "etcd-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.443149   21143 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.446687   21143 pod_ready.go:93] pod "kube-apiserver-addons-653578" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:36.446703   21143 pod_ready.go:82] duration metric: took 3.548189ms for pod "kube-apiserver-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.446711   21143 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.449888   21143 pod_ready.go:93] pod "kube-controller-manager-addons-653578" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:36.449904   21143 pod_ready.go:82] duration metric: took 3.186143ms for pod "kube-controller-manager-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.449914   21143 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g5thg" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.545785   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:36.634048   21143 pod_ready.go:93] pod "kube-proxy-g5thg" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:36.634072   21143 pod_ready.go:82] duration metric: took 184.149851ms for pod "kube-proxy-g5thg" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.634086   21143 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:36.635966   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:36.637025   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.034380   21143 pod_ready.go:93] pod "kube-scheduler-addons-653578" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:37.034402   21143 pod_ready.go:82] duration metric: took 400.307832ms for pod "kube-scheduler-addons-653578" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:37.034415   21143 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7bg4q" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:37.045501   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.136535   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.137393   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.434475   21143 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7bg4q" in "kube-system" namespace has status "Ready":"True"
	I0829 18:07:37.434497   21143 pod_ready.go:82] duration metric: took 400.075619ms for pod "nvidia-device-plugin-daemonset-7bg4q" in "kube-system" namespace to be "Ready" ...
	I0829 18:07:37.434505   21143 pod_ready.go:39] duration metric: took 36.59798237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 18:07:37.434522   21143 api_server.go:52] waiting for apiserver process to appear ...
	I0829 18:07:37.434568   21143 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:07:37.447806   21143 api_server.go:72] duration metric: took 38.743351776s to wait for apiserver process to appear ...
	I0829 18:07:37.447829   21143 api_server.go:88] waiting for apiserver healthz status ...
	I0829 18:07:37.447847   21143 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0829 18:07:37.451367   21143 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0829 18:07:37.452175   21143 api_server.go:141] control plane version: v1.31.0
	I0829 18:07:37.452201   21143 api_server.go:131] duration metric: took 4.364084ms to wait for apiserver health ...
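The healthz check above is a plain HTTPS GET against the apiserver, considered healthy once the endpoint returns 200 with body "ok". A minimal Go sketch of such a probe (TLS verification is skipped because this throwaway client does not load the cluster CA; the address is the one from the log and is environment-specific):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Skip certificate verification: the apiserver's cert is signed
		// by minikube's own CA, which this one-off probe does not load.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}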
	I0829 18:07:37.452211   21143 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 18:07:37.546121   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:37.635795   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:37.636742   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:37.639155   21143 system_pods.go:59] 18 kube-system pods found
	I0829 18:07:37.639177   21143 system_pods.go:61] "coredns-6f6b679f8f-ggmm6" [5ed9a409-937d-4e35-b357-2dae71798016] Running
	I0829 18:07:37.639187   21143 system_pods.go:61] "csi-hostpath-attacher-0" [de475401-3e6b-4499-9daf-e67470defb12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:37.639198   21143 system_pods.go:61] "csi-hostpath-resizer-0" [feee6f5f-9ef4-484f-b286-2948a4e55165] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:37.639212   21143 system_pods.go:61] "csi-hostpathplugin-m9nwt" [e4bd194f-8962-405a-9653-bf3062ad470f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:37.639219   21143 system_pods.go:61] "etcd-addons-653578" [2ab93a5c-cde8-4d68-a6a6-d5bd567f6dd7] Running
	I0829 18:07:37.639227   21143 system_pods.go:61] "kube-apiserver-addons-653578" [f9451288-b3a7-45cf-aea1-112269f3cdca] Running
	I0829 18:07:37.639231   21143 system_pods.go:61] "kube-controller-manager-addons-653578" [2e1a01c9-f572-47fd-9345-597825b9c2b1] Running
	I0829 18:07:37.639235   21143 system_pods.go:61] "kube-ingress-dns-minikube" [2a3ec2dd-9619-46e4-8fd6-642878d38f5c] Running
	I0829 18:07:37.639239   21143 system_pods.go:61] "kube-proxy-g5thg" [58c464b9-0e91-40a4-affe-0ac092954d17] Running
	I0829 18:07:37.639244   21143 system_pods.go:61] "kube-scheduler-addons-653578" [378c69f7-c956-4901-a0f1-2bfe4060b1ba] Running
	I0829 18:07:37.639249   21143 system_pods.go:61] "metrics-server-8988944d9-hfkph" [98d84941-9a37-497f-92d4-aeb71bae507f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:37.639254   21143 system_pods.go:61] "nvidia-device-plugin-daemonset-7bg4q" [dd96e728-76df-48ee-ade4-d42404749188] Running
	I0829 18:07:37.639258   21143 system_pods.go:61] "registry-6fb4cdfc84-zc54m" [a3caeea3-7234-42cc-b0fb-1182264d0d96] Running
	I0829 18:07:37.639263   21143 system_pods.go:61] "registry-proxy-vvcrx" [b379348e-09dc-44aa-8751-a98fd763a638] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:37.639270   21143 system_pods.go:61] "snapshot-controller-56fcc65765-mg6bh" [7fdef5ea-88fe-47ee-85a7-818a80383b38] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:37.639275   21143 system_pods.go:61] "snapshot-controller-56fcc65765-qff92" [f5d4e5f3-4d3e-4049-aaa9-ea2aa2cd619a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:37.639283   21143 system_pods.go:61] "storage-provisioner" [4cc883e1-2921-4759-b44b-deb685de789e] Running
	I0829 18:07:37.639289   21143 system_pods.go:61] "tiller-deploy-b48cc5f79-nzc4x" [97132718-1159-440c-9985-e5c297ed90f0] Running
	I0829 18:07:37.639298   21143 system_pods.go:74] duration metric: took 187.0807ms to wait for pod list to return data ...
	I0829 18:07:37.639311   21143 default_sa.go:34] waiting for default service account to be created ...
	I0829 18:07:37.833930   21143 default_sa.go:45] found service account: "default"
	I0829 18:07:37.833951   21143 default_sa.go:55] duration metric: took 194.631016ms for default service account to be created ...
	I0829 18:07:37.833961   21143 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 18:07:38.039906   21143 system_pods.go:86] 18 kube-system pods found
	I0829 18:07:38.039938   21143 system_pods.go:89] "coredns-6f6b679f8f-ggmm6" [5ed9a409-937d-4e35-b357-2dae71798016] Running
	I0829 18:07:38.039951   21143 system_pods.go:89] "csi-hostpath-attacher-0" [de475401-3e6b-4499-9daf-e67470defb12] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 18:07:38.039960   21143 system_pods.go:89] "csi-hostpath-resizer-0" [feee6f5f-9ef4-484f-b286-2948a4e55165] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 18:07:38.039970   21143 system_pods.go:89] "csi-hostpathplugin-m9nwt" [e4bd194f-8962-405a-9653-bf3062ad470f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 18:07:38.039980   21143 system_pods.go:89] "etcd-addons-653578" [2ab93a5c-cde8-4d68-a6a6-d5bd567f6dd7] Running
	I0829 18:07:38.039990   21143 system_pods.go:89] "kube-apiserver-addons-653578" [f9451288-b3a7-45cf-aea1-112269f3cdca] Running
	I0829 18:07:38.039998   21143 system_pods.go:89] "kube-controller-manager-addons-653578" [2e1a01c9-f572-47fd-9345-597825b9c2b1] Running
	I0829 18:07:38.040005   21143 system_pods.go:89] "kube-ingress-dns-minikube" [2a3ec2dd-9619-46e4-8fd6-642878d38f5c] Running
	I0829 18:07:38.040013   21143 system_pods.go:89] "kube-proxy-g5thg" [58c464b9-0e91-40a4-affe-0ac092954d17] Running
	I0829 18:07:38.040019   21143 system_pods.go:89] "kube-scheduler-addons-653578" [378c69f7-c956-4901-a0f1-2bfe4060b1ba] Running
	I0829 18:07:38.040030   21143 system_pods.go:89] "metrics-server-8988944d9-hfkph" [98d84941-9a37-497f-92d4-aeb71bae507f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 18:07:38.040036   21143 system_pods.go:89] "nvidia-device-plugin-daemonset-7bg4q" [dd96e728-76df-48ee-ade4-d42404749188] Running
	I0829 18:07:38.040044   21143 system_pods.go:89] "registry-6fb4cdfc84-zc54m" [a3caeea3-7234-42cc-b0fb-1182264d0d96] Running
	I0829 18:07:38.040052   21143 system_pods.go:89] "registry-proxy-vvcrx" [b379348e-09dc-44aa-8751-a98fd763a638] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 18:07:38.040064   21143 system_pods.go:89] "snapshot-controller-56fcc65765-mg6bh" [7fdef5ea-88fe-47ee-85a7-818a80383b38] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:38.040076   21143 system_pods.go:89] "snapshot-controller-56fcc65765-qff92" [f5d4e5f3-4d3e-4049-aaa9-ea2aa2cd619a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 18:07:38.040084   21143 system_pods.go:89] "storage-provisioner" [4cc883e1-2921-4759-b44b-deb685de789e] Running
	I0829 18:07:38.040092   21143 system_pods.go:89] "tiller-deploy-b48cc5f79-nzc4x" [97132718-1159-440c-9985-e5c297ed90f0] Running
	I0829 18:07:38.040102   21143 system_pods.go:126] duration metric: took 206.136297ms to wait for k8s-apps to be running ...
	I0829 18:07:38.040114   21143 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 18:07:38.040167   21143 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:07:38.045145   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.054689   21143 system_svc.go:56] duration metric: took 14.570263ms WaitForService to wait for kubelet
	I0829 18:07:38.054715   21143 kubeadm.go:582] duration metric: took 39.3502636s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 18:07:38.054738   21143 node_conditions.go:102] verifying NodePressure condition ...
	I0829 18:07:38.137164   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.137500   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:38.234888   21143 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0829 18:07:38.234917   21143 node_conditions.go:123] node cpu capacity is 8
	I0829 18:07:38.234932   21143 node_conditions.go:105] duration metric: took 180.184124ms to run NodePressure ...
	I0829 18:07:38.234946   21143 start.go:241] waiting for startup goroutines ...
	I0829 18:07:38.546333   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:38.636515   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:38.637473   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.054239   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.136201   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.137373   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:39.546333   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:39.636385   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:39.637938   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.046136   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.136215   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.137297   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:40.547077   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:40.647225   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:40.647520   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.046340   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.145334   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.147507   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:41.546197   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:41.637121   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:41.637303   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.045649   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.136294   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:42.137587   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:42.545764   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:42.646789   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:42.647037   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.045998   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.136125   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:43.137576   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:43.546176   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:43.636504   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:43.637665   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.045869   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.137681   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:44.138270   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:44.546041   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:44.636118   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:44.637488   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.046156   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.136165   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:45.137363   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:45.607471   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:45.708200   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:45.708603   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.045895   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.136685   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:46.138440   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:46.546200   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:46.636177   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:46.637086   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.045273   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.136268   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:47.137348   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:47.546126   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:47.636777   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:47.637920   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.045838   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.136062   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:48.137104   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:48.545556   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:48.637099   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:48.637433   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.046798   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.136729   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:49.138183   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:49.546071   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:49.640667   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:49.641170   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.046357   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.136320   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:50.137229   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:50.545724   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:50.637053   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:50.637792   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.045786   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.135807   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:51.138124   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:51.545126   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:51.638105   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:51.638205   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.045638   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.136864   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 18:07:52.137808   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.546200   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:52.646604   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:52.647063   21143 kapi.go:107] duration metric: took 41.514297519s to wait for kubernetes.io/minikube-addons=registry ...
	I0829 18:07:53.046170   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.146179   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:53.546179   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:53.645405   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.045317   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.137996   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:54.545855   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:54.642636   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.046484   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.137733   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:55.545329   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:55.637583   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.047218   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.138504   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:56.546135   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:56.638918   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.046307   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.138564   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:57.546125   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:57.638674   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.048924   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.146711   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:58.546273   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:58.637735   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.046204   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.137896   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:07:59.545321   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:07:59.637481   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.046183   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.138585   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:00.545902   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:00.646551   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.047371   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.137844   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:01.546476   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:01.638189   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.046528   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.138242   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:02.546110   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:02.638212   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.046080   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.137745   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:03.545053   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:03.637563   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.045183   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.137906   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:04.546650   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:04.637976   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.046889   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.154614   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:05.546603   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:05.637946   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.046231   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.138649   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:06.545533   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:06.638258   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.046898   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.139039   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:07.545862   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:07.637141   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.046721   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.138482   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:08.547324   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:08.646740   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.046757   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.138641   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:09.546834   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:09.646625   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.045979   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.138335   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:10.552766   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:10.653758   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.045904   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.138644   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:11.546970   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:11.647248   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.046023   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.138445   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:12.545884   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:12.638105   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.046430   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.138730   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:13.546369   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:13.637988   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.046108   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 18:08:14.138942   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:14.546102   21143 kapi.go:107] duration metric: took 1m2.004672921s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 18:08:14.638635   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.138326   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:15.638882   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.138602   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:16.638886   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.138950   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:17.638926   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:18.138331   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:18.637699   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:19.137500   21143 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 18:08:19.637873   21143 kapi.go:107] duration metric: took 1m8.503907443s to wait for app.kubernetes.io/name=ingress-nginx ...
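(Editor's note: the kapi.go lines above are minikube's pod-wait loop: list the pods matching a label selector, log the phase while it is still Pending, and emit a "duration metric" line once the wait succeeds. Below is a minimal client-go sketch of that pattern, for orientation only; it is not minikube's actual kapi.go, and the function name, the 500ms poll interval, and the Running-phase check are assumptions made for the example.)

package kapiwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabel polls a namespace until every pod matching selector is Running,
// mirroring the "waiting for pod ... current state: Pending" loop in the log.
func WaitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient errors and empty lists: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}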
	I0829 18:08:37.257486   21143 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 18:08:37.257506   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:37.757493   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.258427   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:38.757449   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.257606   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:39.757249   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.257246   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:40.757124   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.257867   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:41.756795   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.257742   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:42.757682   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.258153   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:43.757925   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.257861   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:44.756894   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.257152   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:45.756966   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.257747   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:46.757830   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.257522   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:47.757517   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.257561   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:48.756393   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.257311   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:49.757223   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.257255   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:50.757015   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.256890   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:51.758124   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.257648   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:52.757367   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.257489   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:53.757638   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.257652   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:54.757390   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.257319   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:55.757365   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.257323   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:56.757412   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.257545   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:57.757528   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.257628   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:58.757079   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.257398   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:08:59.757347   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.257604   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:00.757479   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.257073   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:01.757335   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.257488   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:02.757492   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.257587   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:03.757517   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.257859   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:04.757466   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.257773   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:05.757668   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.257562   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:06.757868   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.257809   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:07.756747   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.257871   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:08.757242   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.257772   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:09.756831   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.257974   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:10.757453   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.257291   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:11.757348   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.257637   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:12.757982   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.256858   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:13.758453   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.257940   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:14.757144   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.257262   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:15.757148   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.258180   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:16.757375   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.257482   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:17.757459   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:18.257433   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:18.757239   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:19.257652   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:19.757393   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:20.258248   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:20.757326   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:21.257091   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:21.757349   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:22.257608   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:22.757634   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:23.257495   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:23.757041   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:24.257239   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:24.757294   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:25.257669   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:25.757439   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:26.257799   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:26.757936   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:27.257719   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:27.757842   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:28.257926   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:28.757073   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:29.256364   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:29.757311   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:30.257391   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:30.757260   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:31.256932   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:31.756755   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:32.257548   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:32.757515   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:33.257449   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:33.757014   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:34.257664   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:34.756800   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:35.257050   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:35.757031   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:36.256787   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:36.758524   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:37.257678   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:37.757450   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:38.257651   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:38.756581   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:39.257447   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:39.757080   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:40.257303   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:40.757232   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:41.257354   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:41.757765   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:42.257684   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:42.757795   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:43.257535   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:43.757337   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:44.257451   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:44.757471   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:45.257727   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:45.757736   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:46.257904   21143 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0829 18:09:46.757328   21143 kapi.go:107] duration metric: took 2m32.503550654s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 18:09:46.758793   21143 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-653578 cluster.
	I0829 18:09:46.760127   21143 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 18:09:46.761378   21143 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 18:09:46.762683   21143 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, default-storageclass, volcano, storage-provisioner, helm-tiller, inspektor-gadget, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0829 18:09:46.764030   21143 addons.go:510] duration metric: took 2m48.059524575s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner default-storageclass volcano storage-provisioner helm-tiller inspektor-gadget metrics-server yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0829 18:09:46.764075   21143 start.go:246] waiting for cluster config update ...
	I0829 18:09:46.764098   21143 start.go:255] writing updated cluster config ...
	I0829 18:09:46.764471   21143 ssh_runner.go:195] Run: rm -f paused
	I0829 18:09:46.813280   21143 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0829 18:09:46.815129   21143 out.go:177] * Done! kubectl is now configured to use "addons-653578" cluster and "default" namespace by default
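(Editor's note: the gcp-auth messages above mention the `gcp-auth-skip-secret` opt-out label. As a hedged sketch of what that looks like in practice, using client-go types, a pod that should not get credentials mounted carries the label in its metadata. The pod name, the label value "true", and the container are illustrative, not taken from this run.)

package podspec

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithoutGCPCreds returns a pod spec that opts out of gcp-auth's
// credential injection via the gcp-auth-skip-secret label.
func PodWithoutGCPCreds() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-creds", // hypothetical name
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "gcr.io/k8s-minikube/busybox", // image used elsewhere in this test run
			}},
		},
	}
}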
	
	
	==> Docker <==
	Aug 29 18:19:20 addons-653578 dockerd[1342]: time="2024-08-29T18:19:20.127480997Z" level=info msg="ignoring event" container=ba5204e74ac6b919731914dcad306cab55d691451be16b07a6378c27a20829c4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:21 addons-653578 dockerd[1342]: time="2024-08-29T18:19:21.466866975Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:19:21 addons-653578 dockerd[1342]: time="2024-08-29T18:19:21.469147839Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 29 18:19:22 addons-653578 dockerd[1342]: time="2024-08-29T18:19:22.458758724Z" level=info msg="ignoring event" container=ec83ef98d2c264e749c8a12a7be7da0337a9e6a250d69f8c6f4766dcbd486910 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:22 addons-653578 dockerd[1342]: time="2024-08-29T18:19:22.462806049Z" level=info msg="ignoring event" container=0e736b369be23b7ebd578d2a5015e12d97192dd2ed04396336d03f1dd675c02f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:22 addons-653578 dockerd[1342]: time="2024-08-29T18:19:22.639582053Z" level=info msg="ignoring event" container=3781bb09a16e9f35fed1c7a9ea040c8e171f03b23e806dcebdbd1d4e61e7ebd4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:22 addons-653578 dockerd[1342]: time="2024-08-29T18:19:22.657999882Z" level=info msg="ignoring event" container=f68fa0629ed8a144e261d271b6bf27aa6a9e9151fd1353c2ba2966fc7ec8a38d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:24 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:24Z" level=error msg="error getting RW layer size for container ID '162519a19d622813fbf17359749df71b228cba1558a495ccf83057e16536417d': Error response from daemon: No such container: 162519a19d622813fbf17359749df71b228cba1558a495ccf83057e16536417d"
	Aug 29 18:19:24 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:24Z" level=error msg="Set backoffDuration to : 1m0s for container ID '162519a19d622813fbf17359749df71b228cba1558a495ccf83057e16536417d'"
	Aug 29 18:19:25 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc9827ffd061c02e2f9ca16a976dbc77214a6ba2d125bb4febb50929125d74a4/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	Aug 29 18:19:25 addons-653578 dockerd[1342]: time="2024-08-29T18:19:25.273507051Z" level=warning msg="reference for unknown type: " digest="sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971" remote="ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971"
	Aug 29 18:19:29 addons-653578 dockerd[1342]: time="2024-08-29T18:19:29.448235347Z" level=info msg="ignoring event" container=65fe047efb332a8ee50b5f5d2b331db8f37af968fbe05cecf6574d24f0d96953 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:29 addons-653578 dockerd[1342]: time="2024-08-29T18:19:29.528678771Z" level=info msg="ignoring event" container=748d8bef2a0ef8e78a739591e45c3381468f2dfb515b5e212ec59d72cc68d8d8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:30 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:30Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.25.0@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:899d106eeb55b0afc4ee6e51c03bc4418de0bd0e79c39744d4d0d751aae6a971"
	Aug 29 18:19:30 addons-653578 dockerd[1342]: time="2024-08-29T18:19:30.155649920Z" level=info msg="ignoring event" container=b4365ed53c4a239b0a1a7dacb509e1efded614f7d33d95938336bbf34dbf93e9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:30 addons-653578 dockerd[1342]: time="2024-08-29T18:19:30.219484788Z" level=info msg="ignoring event" container=2516b796ed310ddc3c9d36f81ffbb972f3aa817a78635ebde1fc6bf7107ffc6f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:34 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:34Z" level=error msg="error getting RW layer size for container ID '748d8bef2a0ef8e78a739591e45c3381468f2dfb515b5e212ec59d72cc68d8d8': Error response from daemon: No such container: 748d8bef2a0ef8e78a739591e45c3381468f2dfb515b5e212ec59d72cc68d8d8"
	Aug 29 18:19:34 addons-653578 cri-dockerd[1607]: time="2024-08-29T18:19:34Z" level=error msg="Set backoffDuration to : 1m0s for container ID '748d8bef2a0ef8e78a739591e45c3381468f2dfb515b5e212ec59d72cc68d8d8'"
	Aug 29 18:19:37 addons-653578 dockerd[1342]: time="2024-08-29T18:19:37.149395854Z" level=info msg="ignoring event" container=cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:37 addons-653578 dockerd[1342]: time="2024-08-29T18:19:37.260023170Z" level=info msg="ignoring event" container=bc9827ffd061c02e2f9ca16a976dbc77214a6ba2d125bb4febb50929125d74a4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:39 addons-653578 dockerd[1342]: time="2024-08-29T18:19:39.200502383Z" level=info msg="ignoring event" container=e808e6d41e0fc74e8f8a9224eebb2435bdc0aa95678315884406b24127d261f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:39 addons-653578 dockerd[1342]: time="2024-08-29T18:19:39.719702806Z" level=info msg="ignoring event" container=ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:39 addons-653578 dockerd[1342]: time="2024-08-29T18:19:39.794197340Z" level=info msg="ignoring event" container=b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:39 addons-653578 dockerd[1342]: time="2024-08-29T18:19:39.870473027Z" level=info msg="ignoring event" container=fe6cfac0435c899c2ae3de9ff820650f1ad897d8e25f4deeeb137fe7b4d61a85 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:19:39 addons-653578 dockerd[1342]: time="2024-08-29T18:19:39.936121723Z" level=info msg="ignoring event" container=a1c41c2184c00b12a7b00e4b14b6edcc2d8001a1d0e8a861392a07ea2e9da470 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	23f7ed928c504       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  24 seconds ago      Running             hello-world-app           0                   9997a43ff322d       hello-world-app-55bf9c44b4-sgzzg
	3fa16f9ecd8ad       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                34 seconds ago      Running             nginx                     0                   a37dff95cbcd1       nginx
	559e0ab7031c8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   06244283fb857       gcp-auth-89d5ffd79-d4998
	fc1f34f1f4344       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   51340a151a25a       ingress-nginx-admission-patch-xfn2r
	6d2e4b9d698c1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   4d0bec91a50fd       ingress-nginx-admission-create-q6ss8
	b6df3bb807a63       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy            0                   a1c41c2184c00       registry-proxy-vvcrx
	ee84a1b5365d8       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago      Exited              registry                  0                   fe6cfac0435c8       registry-6fb4cdfc84-zc54m
	d20858846ac48       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   cf61bff70a708       storage-provisioner
	8410e1d3a401b       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                   0                   6c8c0af996e0e       coredns-6f6b679f8f-ggmm6
	9ba227f91ed45       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                0                   8bd6dfe579f25       kube-proxy-g5thg
	4ee9277f7fd40       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   764665a2bb421       etcd-addons-653578
	5c429e28082ae       045733566833c                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   535abdac600bc       kube-controller-manager-addons-653578
	f9b019e8ce66a       604f5db92eaa8                                                                                                                12 minutes ago      Running             kube-apiserver            0                   878e64a25a60e       kube-apiserver-addons-653578
	a1dddf242b55c       1766f54c897f0                                                                                                                12 minutes ago      Running             kube-scheduler            0                   614b79cd97f42       kube-scheduler-addons-653578
	
	
	==> coredns [8410e1d3a401] <==
	[INFO] 10.244.0.8:55795 - 57533 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000102395s
	[INFO] 10.244.0.8:51517 - 3961 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058298s
	[INFO] 10.244.0.8:51517 - 34939 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092431s
	[INFO] 10.244.0.8:56420 - 44374 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004600488s
	[INFO] 10.244.0.8:56420 - 40017 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004849465s
	[INFO] 10.244.0.8:45639 - 58659 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004201566s
	[INFO] 10.244.0.8:45639 - 4647 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00496081s
	[INFO] 10.244.0.8:35728 - 22737 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003966642s
	[INFO] 10.244.0.8:35728 - 37588 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004470042s
	[INFO] 10.244.0.8:55922 - 63300 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000081823s
	[INFO] 10.244.0.8:55922 - 54855 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141798s
	[INFO] 10.244.0.26:47775 - 35949 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000335291s
	[INFO] 10.244.0.26:49207 - 33437 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000381487s
	[INFO] 10.244.0.26:52054 - 38049 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011894s
	[INFO] 10.244.0.26:43771 - 57643 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142305s
	[INFO] 10.244.0.26:59632 - 27521 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097455s
	[INFO] 10.244.0.26:43889 - 16077 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132606s
	[INFO] 10.244.0.26:36724 - 37748 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00711692s
	[INFO] 10.244.0.26:49206 - 42979 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007616664s
	[INFO] 10.244.0.26:57411 - 4182 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005370194s
	[INFO] 10.244.0.26:51229 - 24772 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.005595068s
	[INFO] 10.244.0.26:53799 - 31976 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004779618s
	[INFO] 10.244.0.26:59386 - 37193 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005264582s
	[INFO] 10.244.0.26:45440 - 15620 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002037662s
	[INFO] 10.244.0.26:51423 - 43189 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.00238371s
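(Editor's note: the paired NXDOMAIN/NOERROR lookups above are ordinary search-path expansion: with options ndots:5, visible in the rewritten resolv.conf earlier in the Docker log, a name such as registry.kube-system.svc.cluster.local is first tried against each search suffix (svc.cluster.local, cluster.local, europe-west4-a.c.k8s-minikube.internal, ...) before the absolute query answers NOERROR. A trailing dot marks a name as fully qualified and skips the expansion. The sketch below is illustrative only and is not part of the test.)

package dnsdemo

import (
	"context"
	"net"
)

// lookupRegistry resolves the registry service by its absolute name.
// The trailing dot makes the name fully qualified: one query, no
// search-domain expansion, so none of the *.google.internal-style
// candidates seen in the coredns log above.
func lookupRegistry(ctx context.Context) ([]string, error) {
	var r net.Resolver
	return r.LookupHost(ctx, "registry.kube-system.svc.cluster.local.")
}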
	
	
	==> describe nodes <==
	Name:               addons-653578
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-653578
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-653578
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T18_06_54_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-653578
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:06:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-653578
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:19:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:19:27 +0000   Thu, 29 Aug 2024 18:06:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:19:27 +0000   Thu, 29 Aug 2024 18:06:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:19:27 +0000   Thu, 29 Aug 2024 18:06:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:19:27 +0000   Thu, 29 Aug 2024 18:06:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-653578
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859320Ki
	  pods:               110
	System Info:
	  Machine ID:                 67ddfa06d90c4e1f803693df8ad61014
	  System UUID:                b225e8e8-4a18-4ed5-9093-ca893406e888
	  Boot ID:                    159b1acb-a9cc-4f6c-ab3a-a431548eb42b
	  Kernel Version:             5.15.0-1067-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-sgzzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  gcp-auth                    gcp-auth-89d5ffd79-d4998                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-6f6b679f8f-ggmm6                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-653578                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-653578             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-653578    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-g5thg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-653578             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-653578 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-653578 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x6 over 12m)  kubelet          Node addons-653578 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-653578 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-653578 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-653578 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-653578 event: Registered Node addons-653578 in Controller
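(Editor's note: as a quick arithmetic check on the "Allocated resources" block above, the per-pod CPU requests in the pod table sum to the reported node total. The snippet below just re-adds them; the values are copied from the table and nothing beyond that is assumed.)

package nodecalc

// TotalCPURequestsMilli sums the CPU requests (millicores) from the
// non-terminated pods table for addons-653578.
func TotalCPURequestsMilli() int {
	requests := map[string]int{
		"coredns-6f6b679f8f-ggmm6":              100,
		"etcd-addons-653578":                    100,
		"kube-apiserver-addons-653578":          250,
		"kube-controller-manager-addons-653578": 200,
		"kube-scheduler-addons-653578":          100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	return total // 750m, i.e. roughly 9% of the node's 8 CPUs (8000m), matching the report
}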
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9e c8 4f 37 61 c9 08 06
	[  +3.282139] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 5b 79 2c 4b 0f 08 06
	[  +5.424865] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ca bb 43 1a bc 01 08 06
	[  +0.629942] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a e5 ee fa 1d 81 08 06
	[  +0.233238] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 32 03 f7 7a 77 92 08 06
	[  +7.012376] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 56 66 df 27 f0 65 08 06
	[  +1.028725] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 82 cd d6 0e 57 89 08 06
	[Aug29 18:09] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 0a a2 4b c6 31 08 06
	[  +0.031720] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 56 dd bc 9e 3b b7 08 06
	[ +27.571282] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff d6 52 15 ba 88 ed 08 06
	[  +0.000489] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 92 85 e7 fa 93 ed 08 06
	[Aug29 18:18] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 bb 88 d9 1a f4 08 06
	[Aug29 18:19] IPv4: martian source 10.244.0.35 from 10.244.0.22, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 66 df 27 f0 65 08 06
	
	
	==> etcd [4ee9277f7fd4] <==
	{"level":"info","ts":"2024-08-29T18:06:49.632031Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:49.632060Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:06:49.632321Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:06:49.632371Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:06:49.632907Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:49.633343Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:49.633379Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:06:49.633386Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:49.633397Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:06:49.634289Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-29T18:06:49.634575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-08-29T18:07:45.396850Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.225887ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-29T18:07:45.396916Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.979371ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-08-29T18:07:45.396926Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.001713ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-08-29T18:07:45.396945Z","caller":"traceutil/trace.go:171","msg":"trace[1461856671] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:1105; }","duration":"137.370451ms","start":"2024-08-29T18:07:45.259555Z","end":"2024-08-29T18:07:45.396926Z","steps":["trace[1461856671] 'range keys from in-memory index tree'  (duration: 137.153366ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:45.396962Z","caller":"traceutil/trace.go:171","msg":"trace[2044221907] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1105; }","duration":"141.029697ms","start":"2024-08-29T18:07:45.255919Z","end":"2024-08-29T18:07:45.396949Z","steps":["trace[2044221907] 'range keys from in-memory index tree'  (duration: 140.918985ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:45.396965Z","caller":"traceutil/trace.go:171","msg":"trace[353303635] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1105; }","duration":"118.039883ms","start":"2024-08-29T18:07:45.278913Z","end":"2024-08-29T18:07:45.396953Z","steps":["trace[353303635] 'range keys from in-memory index tree'  (duration: 117.855574ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:07:45.522273Z","caller":"traceutil/trace.go:171","msg":"trace[850829868] linearizableReadLoop","detail":"{readStateIndex:1130; appliedIndex:1129; }","duration":"119.860704ms","start":"2024-08-29T18:07:45.402393Z","end":"2024-08-29T18:07:45.522254Z","steps":["trace[850829868] 'read index received'  (duration: 119.698808ms)","trace[850829868] 'applied index is now lower than readState.Index'  (duration: 161.327µs)"],"step_count":2}
	{"level":"info","ts":"2024-08-29T18:07:45.522299Z","caller":"traceutil/trace.go:171","msg":"trace[1584357631] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"122.389756ms","start":"2024-08-29T18:07:45.399895Z","end":"2024-08-29T18:07:45.522285Z","steps":["trace[1584357631] 'process raft request'  (duration: 122.254482ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-29T18:07:45.522421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"120.000197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/gcp-auth/gcp-auth-certs-create.17f04548531a0ae8\" ","response":"range_response_count:1 size:916"}
	{"level":"info","ts":"2024-08-29T18:07:45.522456Z","caller":"traceutil/trace.go:171","msg":"trace[1892644417] range","detail":"{range_begin:/registry/events/gcp-auth/gcp-auth-certs-create.17f04548531a0ae8; range_end:; response_count:1; response_revision:1106; }","duration":"120.05474ms","start":"2024-08-29T18:07:45.402390Z","end":"2024-08-29T18:07:45.522445Z","steps":["trace[1892644417] 'agreement among raft nodes before linearized reading'  (duration: 119.932379ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:08:17.972859Z","caller":"traceutil/trace.go:171","msg":"trace[1233127825] transaction","detail":"{read_only:false; response_revision:1285; number_of_response:1; }","duration":"116.023049ms","start":"2024-08-29T18:08:17.856822Z","end":"2024-08-29T18:08:17.972845Z","steps":["trace[1233127825] 'process raft request'  (duration: 115.935166ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:16:49.725409Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1912}
	{"level":"info","ts":"2024-08-29T18:16:49.750589Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1912,"took":"24.572073ms","hash":1834671684,"current-db-size-bytes":8712192,"current-db-size":"8.7 MB","current-db-size-in-use-bytes":4984832,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-29T18:16:49.750631Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1834671684,"revision":1912,"compact-revision":-1}
	
	
	==> gcp-auth [559e0ab7031c] <==
	2024/08/29 18:10:26 Ready to write response ...
	2024/08/29 18:18:29 Ready to marshal response ...
	2024/08/29 18:18:29 Ready to write response ...
	2024/08/29 18:18:29 Ready to marshal response ...
	2024/08/29 18:18:29 Ready to write response ...
	2024/08/29 18:18:39 Ready to marshal response ...
	2024/08/29 18:18:39 Ready to write response ...
	2024/08/29 18:18:40 Ready to marshal response ...
	2024/08/29 18:18:40 Ready to write response ...
	2024/08/29 18:18:40 Ready to marshal response ...
	2024/08/29 18:18:40 Ready to write response ...
	2024/08/29 18:18:45 Ready to marshal response ...
	2024/08/29 18:18:45 Ready to write response ...
	2024/08/29 18:19:02 Ready to marshal response ...
	2024/08/29 18:19:02 Ready to write response ...
	2024/08/29 18:19:05 Ready to marshal response ...
	2024/08/29 18:19:05 Ready to write response ...
	2024/08/29 18:19:14 Ready to marshal response ...
	2024/08/29 18:19:14 Ready to write response ...
	2024/08/29 18:19:24 Ready to marshal response ...
	2024/08/29 18:19:24 Ready to write response ...
	2024/08/29 18:19:24 Ready to marshal response ...
	2024/08/29 18:19:24 Ready to write response ...
	2024/08/29 18:19:24 Ready to marshal response ...
	2024/08/29 18:19:24 Ready to write response ...
	
	
	==> kernel <==
	 18:19:40 up  1:02,  0 users,  load average: 0.40, 0.40, 0.35
	Linux addons-653578 5.15.0-1067-gcp #75~20.04.1-Ubuntu SMP Wed Aug 7 20:43:22 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [f9b019e8ce66] <==
	W0829 18:10:18.238546       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 18:10:18.528308       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0829 18:10:18.735009       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:10:19.083246       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 18:18:54.326172       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 18:18:55.685302       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0829 18:18:56.757587       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0829 18:18:57.168048       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:18:58.182346       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:19:02.616300       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:19:02.824478       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.168.200"}
	I0829 18:19:14.429060       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.133.9"}
	I0829 18:19:22.320893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:22.320942       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:22.333860       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:22.333907       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:22.334023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:22.350102       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:22.350149       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:19:22.355892       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:19:22.355928       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:19:23.335219       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:19:23.356150       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 18:19:23.471053       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 18:19:24.581970       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.246.22"}
	
	
	==> kube-controller-manager [5c429e28082a] <==
	I0829 18:19:26.948587       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0829 18:19:27.743987       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-653578"
	W0829 18:19:27.875477       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:27.875512       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:28.403898       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 18:19:28.403936       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:19:28.622667       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 18:19:28.622702       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:19:28.811521       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I0829 18:19:29.337467       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.335µs"
	I0829 18:19:30.574582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="6.131446ms"
	I0829 18:19:30.574671       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="48.95µs"
	W0829 18:19:31.291534       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:31.291581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:32.539955       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:32.539993       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:32.724566       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:32.724610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:19:32.831048       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:32.831085       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:19:37.118264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="7.397µs"
	I0829 18:19:39.457402       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0829 18:19:39.640764       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="8.072µs"
	W0829 18:19:40.095843       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:19:40.095892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9ba227f91ed4] <==
	I0829 18:07:02.227459       1 server_linux.go:66] "Using iptables proxy"
	I0829 18:07:02.825537       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0829 18:07:02.825614       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:07:03.219013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0829 18:07:03.219131       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:07:03.221756       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:07:03.222161       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:07:03.222191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:07:03.225659       1 config.go:197] "Starting service config controller"
	I0829 18:07:03.225685       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:07:03.225733       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:07:03.225740       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:07:03.225774       1 config.go:326] "Starting node config controller"
	I0829 18:07:03.225779       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:07:03.326757       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:07:03.326816       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:07:03.326858       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a1dddf242b55] <==
	W0829 18:06:51.215965       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:06:51.216053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:51.216004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:51.216082       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.021838       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:06:52.021876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.038438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:06:52.038482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.045653       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:06:52.045699       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.136441       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:06:52.136501       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.173816       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:52.173859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.178169       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0829 18:06:52.178212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.199850       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:06:52.199892       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.268797       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:52.268844       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.284117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:06:52.284163       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:06:52.421412       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:06:52.421452       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0829 18:06:54.435881       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:19:37 addons-653578 kubelet[2458]: I0829 18:19:37.650142    2458 scope.go:117] "RemoveContainer" containerID="cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32"
	Aug 29 18:19:37 addons-653578 kubelet[2458]: E0829 18:19:37.650984    2458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32" containerID="cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32"
	Aug 29 18:19:37 addons-653578 kubelet[2458]: I0829 18:19:37.651022    2458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32"} err="failed to get container status \"cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32\": rpc error: code = Unknown desc = Error response from daemon: No such container: cf3144f2dd89c098358c6fbb4581183267ced198527e8536dc27cc4a2ebb8f32"
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.332011    2458 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e62d30f-ce80-44a7-983d-416423415c94" path="/var/lib/kubelet/pods/4e62d30f-ce80-44a7-983d-416423415c94/volumes"
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.372924    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1a4232d8-4ea0-43cf-be97-b0438297843b-gcp-creds\") pod \"1a4232d8-4ea0-43cf-be97-b0438297843b\" (UID: \"1a4232d8-4ea0-43cf-be97-b0438297843b\") "
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.372984    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn6nd\" (UniqueName: \"kubernetes.io/projected/1a4232d8-4ea0-43cf-be97-b0438297843b-kube-api-access-wn6nd\") pod \"1a4232d8-4ea0-43cf-be97-b0438297843b\" (UID: \"1a4232d8-4ea0-43cf-be97-b0438297843b\") "
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.373011    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a4232d8-4ea0-43cf-be97-b0438297843b-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1a4232d8-4ea0-43cf-be97-b0438297843b" (UID: "1a4232d8-4ea0-43cf-be97-b0438297843b"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.374784    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a4232d8-4ea0-43cf-be97-b0438297843b-kube-api-access-wn6nd" (OuterVolumeSpecName: "kube-api-access-wn6nd") pod "1a4232d8-4ea0-43cf-be97-b0438297843b" (UID: "1a4232d8-4ea0-43cf-be97-b0438297843b"). InnerVolumeSpecName "kube-api-access-wn6nd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.473526    2458 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1a4232d8-4ea0-43cf-be97-b0438297843b-gcp-creds\") on node \"addons-653578\" DevicePath \"\""
	Aug 29 18:19:39 addons-653578 kubelet[2458]: I0829 18:19:39.473565    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wn6nd\" (UniqueName: \"kubernetes.io/projected/1a4232d8-4ea0-43cf-be97-b0438297843b-kube-api-access-wn6nd\") on node \"addons-653578\" DevicePath \"\""
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.015020    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cw6l8\" (UniqueName: \"kubernetes.io/projected/a3caeea3-7234-42cc-b0fb-1182264d0d96-kube-api-access-cw6l8\") pod \"a3caeea3-7234-42cc-b0fb-1182264d0d96\" (UID: \"a3caeea3-7234-42cc-b0fb-1182264d0d96\") "
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.015075    2458 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l9s2\" (UniqueName: \"kubernetes.io/projected/b379348e-09dc-44aa-8751-a98fd763a638-kube-api-access-8l9s2\") pod \"b379348e-09dc-44aa-8751-a98fd763a638\" (UID: \"b379348e-09dc-44aa-8751-a98fd763a638\") "
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.017214    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3caeea3-7234-42cc-b0fb-1182264d0d96-kube-api-access-cw6l8" (OuterVolumeSpecName: "kube-api-access-cw6l8") pod "a3caeea3-7234-42cc-b0fb-1182264d0d96" (UID: "a3caeea3-7234-42cc-b0fb-1182264d0d96"). InnerVolumeSpecName "kube-api-access-cw6l8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.017222    2458 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b379348e-09dc-44aa-8751-a98fd763a638-kube-api-access-8l9s2" (OuterVolumeSpecName: "kube-api-access-8l9s2") pod "b379348e-09dc-44aa-8751-a98fd763a638" (UID: "b379348e-09dc-44aa-8751-a98fd763a638"). InnerVolumeSpecName "kube-api-access-8l9s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.115632    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cw6l8\" (UniqueName: \"kubernetes.io/projected/a3caeea3-7234-42cc-b0fb-1182264d0d96-kube-api-access-cw6l8\") on node \"addons-653578\" DevicePath \"\""
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.115681    2458 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8l9s2\" (UniqueName: \"kubernetes.io/projected/b379348e-09dc-44aa-8751-a98fd763a638-kube-api-access-8l9s2\") on node \"addons-653578\" DevicePath \"\""
	Aug 29 18:19:40 addons-653578 kubelet[2458]: E0829 18:19:40.325220    2458 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="1f934aa2-9991-41cf-ba44-8bc4497190cd"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.678079    2458 scope.go:117] "RemoveContainer" containerID="ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.697006    2458 scope.go:117] "RemoveContainer" containerID="ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: E0829 18:19:40.697943    2458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb" containerID="ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.697986    2458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb"} err="failed to get container status \"ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb\": rpc error: code = Unknown desc = Error response from daemon: No such container: ee84a1b5365d81a4f89ba758849d1d67eca9a2df21312ed43b9221a4e00e03bb"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.698013    2458 scope.go:117] "RemoveContainer" containerID="b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.727004    2458 scope.go:117] "RemoveContainer" containerID="b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: E0829 18:19:40.727746    2458 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90" containerID="b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90"
	Aug 29 18:19:40 addons-653578 kubelet[2458]: I0829 18:19:40.727795    2458 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90"} err="failed to get container status \"b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90\": rpc error: code = Unknown desc = Error response from daemon: No such container: b6df3bb807a63bd4c39483503d9c97995f1420127f332035cb930273dd0c1f90"
	
	
	==> storage-provisioner [d20858846ac4] <==
	I0829 18:07:06.820221       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:07:06.917329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:07:06.917375       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:07:06.927237       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:07:06.927455       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-653578_073045fd-200b-46d3-b276-aecba52e6170!
	I0829 18:07:06.928741       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"20b49a5e-d537-4afa-9679-be3618e8f315", APIVersion:"v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-653578_073045fd-200b-46d3-b276-aecba52e6170 became leader
	I0829 18:07:07.027813       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-653578_073045fd-200b-46d3-b276-aecba52e6170!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-653578 -n addons-653578
helpers_test.go:261: (dbg) Run:  kubectl --context addons-653578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-653578 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-653578 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-653578/192.168.49.2
	Start Time:       Thu, 29 Aug 2024 18:10:26 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t7kj6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-t7kj6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m15s                   default-scheduler  Successfully assigned default/busybox to addons-653578
	  Normal   Pulling    7m58s (x4 over 9m14s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m58s (x4 over 9m14s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m58s (x4 over 9m14s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m14s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x20 over 9m14s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.45s)
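
Taken together, the post-mortem points at the image pull rather than the registry itself: the registry and registry-proxy pods were healthy within ~5s, while the describe output above shows gcr.io/k8s-minikube/busybox failing to pull with "unauthorized: authentication failed", so the registry-test pod that runs wget plausibly never started before the 1m0s timeout. A hypothetical manual re-check against the same cluster (assumes the addons-653578 profile still exists and the busybox image is pullable):

	kubectl --context addons-653578 -n kube-system get svc,ep registry
	kubectl --context addons-653578 run registry-smoke --rm --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c 'wget --spider -S http://registry.kube-system.svc.cluster.local'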

                                                
                                    

Test pass (322/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 30.35
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 12.26
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.97
21 TestBinaryMirror 0.73
22 TestOffline 80.8
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.32
29 TestAddons/serial/Volcano 39.73
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 21.62
35 TestAddons/parallel/InspektorGadget 10.58
36 TestAddons/parallel/MetricsServer 5.56
37 TestAddons/parallel/HelmTiller 11.79
39 TestAddons/parallel/CSI 53.77
40 TestAddons/parallel/Headlamp 18.39
41 TestAddons/parallel/CloudSpanner 5.47
42 TestAddons/parallel/LocalPath 54.88
43 TestAddons/parallel/NvidiaDevicePlugin 5.6
44 TestAddons/parallel/Yakd 11.78
45 TestAddons/StoppedEnableDisable 10.92
46 TestCertOptions 31.91
47 TestCertExpiration 225.98
48 TestDockerFlags 23.21
49 TestForceSystemdFlag 30.91
50 TestForceSystemdEnv 29.02
52 TestKVMDriverInstallOrUpdate 5.74
56 TestErrorSpam/setup 24.12
57 TestErrorSpam/start 0.59
58 TestErrorSpam/status 0.84
59 TestErrorSpam/pause 1.16
60 TestErrorSpam/unpause 1.34
61 TestErrorSpam/stop 10.82
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 65.37
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.83
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.32
73 TestFunctional/serial/CacheCmd/cache/add_local 1.42
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 40.48
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 0.97
84 TestFunctional/serial/LogsFileCmd 0.96
85 TestFunctional/serial/InvalidService 4.76
87 TestFunctional/parallel/ConfigCmd 0.34
88 TestFunctional/parallel/DashboardCmd 13.73
89 TestFunctional/parallel/DryRun 0.34
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 8.68
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 42.85
99 TestFunctional/parallel/SSHCmd 0.5
100 TestFunctional/parallel/CpCmd 1.85
101 TestFunctional/parallel/MySQL 24.06
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.49
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
111 TestFunctional/parallel/License 0.68
112 TestFunctional/parallel/ServiceCmd/DeployApp 10.18
113 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
114 TestFunctional/parallel/ProfileCmd/profile_list 0.36
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
116 TestFunctional/parallel/MountCmd/any-port 7.96
117 TestFunctional/parallel/MountCmd/specific-port 1.85
118 TestFunctional/parallel/ServiceCmd/List 0.52
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
121 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
122 TestFunctional/parallel/ServiceCmd/Format 0.45
123 TestFunctional/parallel/ServiceCmd/URL 0.41
124 TestFunctional/parallel/Version/short 0.05
125 TestFunctional/parallel/Version/components 0.68
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.49
128 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
130 TestFunctional/parallel/ImageCommands/ImageBuild 4.65
131 TestFunctional/parallel/ImageCommands/Setup 2.01
132 TestFunctional/parallel/DockerEnv/bash 0.88
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
134 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
135 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
136 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.22
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.48
145 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
146 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
147 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.03
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 103.94
161 TestMultiControlPlane/serial/DeployApp 5.6
162 TestMultiControlPlane/serial/PingHostFromPods 1
163 TestMultiControlPlane/serial/AddWorkerNode 20.05
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.64
166 TestMultiControlPlane/serial/CopyFile 15.19
167 TestMultiControlPlane/serial/StopSecondaryNode 11.42
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.47
169 TestMultiControlPlane/serial/RestartSecondaryNode 25.32
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 2.41
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 207.48
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.24
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.45
174 TestMultiControlPlane/serial/StopCluster 32.51
175 TestMultiControlPlane/serial/RestartCluster 84.64
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.45
177 TestMultiControlPlane/serial/AddSecondaryNode 37.34
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestImageBuild/serial/Setup 23.59
182 TestImageBuild/serial/NormalBuild 2.48
183 TestImageBuild/serial/BuildWithBuildArg 0.97
184 TestImageBuild/serial/BuildWithDockerIgnore 0.7
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.95
189 TestJSONOutput/start/Command 34.79
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.48
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.45
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.62
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.19
214 TestKicCustomNetwork/create_custom_network 26.79
215 TestKicCustomNetwork/use_default_bridge_network 23.05
216 TestKicExistingNetwork 22.29
217 TestKicCustomSubnet 23.29
218 TestKicStaticIP 22.79
219 TestMainNoArgs 0.04
220 TestMinikubeProfile 50.03
223 TestMountStart/serial/StartWithMountFirst 10.25
224 TestMountStart/serial/VerifyMountFirst 0.23
225 TestMountStart/serial/StartWithMountSecond 10.31
226 TestMountStart/serial/VerifyMountSecond 0.23
227 TestMountStart/serial/DeleteFirst 1.47
228 TestMountStart/serial/VerifyMountPostDelete 0.23
229 TestMountStart/serial/Stop 1.16
230 TestMountStart/serial/RestartStopped 8.58
231 TestMountStart/serial/VerifyMountPostStop 0.23
234 TestMultiNode/serial/FreshStart2Nodes 56.18
235 TestMultiNode/serial/DeployApp2Nodes 47.34
236 TestMultiNode/serial/PingHostFrom2Pods 0.68
237 TestMultiNode/serial/AddNode 18.86
238 TestMultiNode/serial/MultiNodeLabels 0.08
239 TestMultiNode/serial/ProfileList 0.36
240 TestMultiNode/serial/CopyFile 8.77
241 TestMultiNode/serial/StopNode 2.06
242 TestMultiNode/serial/StartAfterStop 9.83
243 TestMultiNode/serial/RestartKeepsNodes 97.04
244 TestMultiNode/serial/DeleteNode 5.19
245 TestMultiNode/serial/StopMultiNode 21.39
246 TestMultiNode/serial/RestartMultiNode 51.66
247 TestMultiNode/serial/ValidateNameConflict 22.88
252 TestPreload 106.08
254 TestScheduledStopUnix 94.45
255 TestSkaffold 101.02
257 TestInsufficientStorage 9.74
258 TestRunningBinaryUpgrade 72.19
260 TestKubernetesUpgrade 332.6
261 TestMissingContainerUpgrade 186.8
262 TestStoppedBinaryUpgrade/Setup 2.61
263 TestStoppedBinaryUpgrade/Upgrade 175.8
272 TestPause/serial/Start 40.32
273 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
276 TestNoKubernetes/serial/StartWithK8s 25.97
288 TestPause/serial/SecondStartNoReconfiguration 35.71
289 TestNoKubernetes/serial/StartWithStopK8s 8.74
290 TestNoKubernetes/serial/Start 7.78
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.25
292 TestNoKubernetes/serial/ProfileList 1.55
293 TestNoKubernetes/serial/Stop 1.19
294 TestNoKubernetes/serial/StartNoArgs 8.47
295 TestPause/serial/Pause 0.51
296 TestPause/serial/VerifyStatus 0.35
297 TestPause/serial/Unpause 0.48
298 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
299 TestPause/serial/PauseAgain 0.61
300 TestPause/serial/DeletePaused 2.3
301 TestPause/serial/VerifyDeletedResources 15.4
303 TestStartStop/group/old-k8s-version/serial/FirstStart 131.29
305 TestStartStop/group/no-preload/serial/FirstStart 67.75
307 TestStartStop/group/embed-certs/serial/FirstStart 67.13
308 TestStartStop/group/no-preload/serial/DeployApp 9.26
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
310 TestStartStop/group/no-preload/serial/Stop 10.62
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 262.94
313 TestStartStop/group/embed-certs/serial/DeployApp 8.24
314 TestStartStop/group/old-k8s-version/serial/DeployApp 8.41
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.8
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.84
317 TestStartStop/group/embed-certs/serial/Stop 10.67
318 TestStartStop/group/old-k8s-version/serial/Stop 10.95
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
320 TestStartStop/group/embed-certs/serial/SecondStart 266.5
321 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
322 TestStartStop/group/old-k8s-version/serial/SecondStart 140.75
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.97
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.73
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.41
330 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
332 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
333 TestStartStop/group/old-k8s-version/serial/Pause 2.45
335 TestStartStop/group/newest-cni/serial/FirstStart 31.81
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
338 TestStartStop/group/newest-cni/serial/Stop 9.53
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 14.6
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
344 TestStartStop/group/newest-cni/serial/Pause 2.27
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
346 TestNetworkPlugins/group/auto/Start 41.5
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.2
349 TestStartStop/group/no-preload/serial/Pause 2.38
350 TestNetworkPlugins/group/kindnet/Start 59.77
351 TestNetworkPlugins/group/auto/KubeletFlags 0.27
352 TestNetworkPlugins/group/auto/NetCatPod 10.18
353 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
354 TestNetworkPlugins/group/auto/DNS 16.54
355 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
356 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
357 TestStartStop/group/embed-certs/serial/Pause 2.42
358 TestNetworkPlugins/group/calico/Start 67.47
359 TestNetworkPlugins/group/auto/Localhost 0.13
360 TestNetworkPlugins/group/auto/HairPin 0.13
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
363 TestNetworkPlugins/group/kindnet/NetCatPod 10.22
364 TestNetworkPlugins/group/custom-flannel/Start 49
365 TestNetworkPlugins/group/kindnet/DNS 0.18
366 TestNetworkPlugins/group/kindnet/Localhost 0.11
367 TestNetworkPlugins/group/kindnet/HairPin 0.12
368 TestNetworkPlugins/group/false/Start 68.55
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.26
371 TestNetworkPlugins/group/calico/NetCatPod 10.2
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.18
374 TestNetworkPlugins/group/calico/DNS 0.13
375 TestNetworkPlugins/group/calico/Localhost 0.1
376 TestNetworkPlugins/group/calico/HairPin 0.11
377 TestNetworkPlugins/group/custom-flannel/DNS 0.13
378 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
379 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
380 TestNetworkPlugins/group/enable-default-cni/Start 36.39
381 TestNetworkPlugins/group/flannel/Start 47.15
382 TestNetworkPlugins/group/false/KubeletFlags 0.29
383 TestNetworkPlugins/group/false/NetCatPod 10.23
384 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
385 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
386 TestNetworkPlugins/group/false/DNS 0.14
387 TestNetworkPlugins/group/false/Localhost 0.12
388 TestNetworkPlugins/group/false/HairPin 0.11
389 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
390 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.58
391 TestNetworkPlugins/group/bridge/Start 68.08
392 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
393 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.93
394 TestNetworkPlugins/group/kubenet/Start 68.22
395 TestNetworkPlugins/group/enable-default-cni/DNS 20.87
396 TestNetworkPlugins/group/flannel/ControllerPod 6.01
397 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
398 TestNetworkPlugins/group/flannel/NetCatPod 10.17
399 TestNetworkPlugins/group/flannel/DNS 0.13
400 TestNetworkPlugins/group/flannel/Localhost 0.12
401 TestNetworkPlugins/group/flannel/HairPin 0.11
402 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
403 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
404 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
405 TestNetworkPlugins/group/bridge/NetCatPod 9.2
406 TestNetworkPlugins/group/bridge/DNS 0.13
407 TestNetworkPlugins/group/bridge/Localhost 0.11
408 TestNetworkPlugins/group/bridge/HairPin 0.1
409 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
410 TestNetworkPlugins/group/kubenet/NetCatPod 10.19
411 TestNetworkPlugins/group/kubenet/DNS 0.13
412 TestNetworkPlugins/group/kubenet/Localhost 0.13
413 TestNetworkPlugins/group/kubenet/HairPin 0.12
TestDownloadOnly/v1.20.0/json-events (30.35s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-749552 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-749552 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (30.351522059s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (30.35s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-749552
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-749552: exit status 85 (57.568404ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-749552 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |          |
	|         | -p download-only-749552        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:29
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:29.188408   19751 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:29.188655   19751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:29.188665   19751 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:29.188669   19751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:29.188884   19751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	W0829 18:05:29.189051   19751 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19531-12929/.minikube/config/config.json: open /home/jenkins/minikube-integration/19531-12929/.minikube/config/config.json: no such file or directory
	I0829 18:05:29.189654   19751 out.go:352] Setting JSON to true
	I0829 18:05:29.190597   19751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2875,"bootTime":1724951854,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:29.190651   19751 start.go:139] virtualization: kvm guest
	I0829 18:05:29.193252   19751 out.go:97] [download-only-749552] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0829 18:05:29.193351   19751 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 18:05:29.193386   19751 notify.go:220] Checking for updates...
	I0829 18:05:29.194834   19751 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:29.196351   19751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:29.197679   19751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:05:29.199047   19751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	I0829 18:05:29.200798   19751 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:29.203769   19751 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:29.204023   19751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:29.225295   19751 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:29.225384   19751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:29.587052   19751 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:05:29.578514955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:29.587169   19751 docker.go:307] overlay module found
	I0829 18:05:29.589077   19751 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:29.589109   19751 start.go:297] selected driver: docker
	I0829 18:05:29.589117   19751 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:29.589208   19751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:29.636108   19751 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:05:29.627804649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:29.636302   19751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:05:29.636766   19751 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:05:29.636940   19751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:05:29.638808   19751 out.go:169] Using Docker driver with root privileges
	I0829 18:05:29.640072   19751 cni.go:84] Creating CNI manager for ""
	I0829 18:05:29.640097   19751 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 18:05:29.640211   19751 start.go:340] cluster config:
	{Name:download-only-749552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-749552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:05:29.641514   19751 out.go:97] Starting "download-only-749552" primary control-plane node in "download-only-749552" cluster
	I0829 18:05:29.641537   19751 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:05:29.642752   19751 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:05:29.642785   19751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:29.642875   19751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:05:29.658258   19751 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:29.658443   19751 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:05:29.658544   19751 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:05:29.848106   19751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0829 18:05:29.848131   19751 cache.go:56] Caching tarball of preloaded images
	I0829 18:05:29.848271   19751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:29.850211   19751 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 18:05:29.850224   19751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:05:29.967391   19751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0829 18:05:41.161583   19751 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:05:41.161668   19751 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:05:41.925924   19751 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 18:05:41.926238   19751 profile.go:143] Saving config to /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/download-only-749552/config.json ...
	I0829 18:05:41.926270   19751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/download-only-749552/config.json: {Name:mka7ae10145d214a79adc9d2d281e639e2ba4069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 18:05:41.926429   19751 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 18:05:41.926592   19751 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19531-12929/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-749552 host does not exist
	  To start a cluster, run: "minikube start -p download-only-749552"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
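
Note: exit status 85 is the expected result here, since the download-only profile was written to disk but its host was never created; the same behavior can be confirmed by hand (profile name taken from this run):

	out/minikube-linux-amd64 logs -p download-only-749552
	echo $?    # expected: 85, with the "host does not exist" hint on stdout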

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-749552
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.0/json-events (12.26s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-684343 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-684343 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (12.263626454s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (12.26s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-684343
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-684343: exit status 85 (58.444462ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-749552 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-749552        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| delete  | -p download-only-749552        | download-only-749552 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC | 29 Aug 24 18:05 UTC |
	| start   | -o=json --download-only        | download-only-684343 | jenkins | v1.33.1 | 29 Aug 24 18:05 UTC |                     |
	|         | -p download-only-684343        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 18:05:59
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 18:05:59.909531   20179 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:05:59.909737   20179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:59.909745   20179 out.go:358] Setting ErrFile to fd 2...
	I0829 18:05:59.909749   20179 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:05:59.909896   20179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:05:59.910424   20179 out.go:352] Setting JSON to true
	I0829 18:05:59.911189   20179 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2906,"bootTime":1724951854,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:05:59.911248   20179 start.go:139] virtualization: kvm guest
	I0829 18:05:59.913225   20179 out.go:97] [download-only-684343] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:05:59.913363   20179 notify.go:220] Checking for updates...
	I0829 18:05:59.914756   20179 out.go:169] MINIKUBE_LOCATION=19531
	I0829 18:05:59.916197   20179 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:05:59.917485   20179 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:05:59.918707   20179 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	I0829 18:05:59.919833   20179 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0829 18:05:59.922303   20179 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 18:05:59.922493   20179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:05:59.943816   20179 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:05:59.943929   20179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:05:59.986908   20179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:05:59.978277021 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:05:59.987009   20179 docker.go:307] overlay module found
	I0829 18:05:59.989035   20179 out.go:97] Using the docker driver based on user configuration
	I0829 18:05:59.989076   20179 start.go:297] selected driver: docker
	I0829 18:05:59.989085   20179 start.go:901] validating driver "docker" against <nil>
	I0829 18:05:59.989177   20179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:06:00.035447   20179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-29 18:06:00.027266692 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:06:00.035613   20179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 18:06:00.036082   20179 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0829 18:06:00.036246   20179 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 18:06:00.038266   20179 out.go:169] Using Docker driver with root privileges
	I0829 18:06:00.039744   20179 cni.go:84] Creating CNI manager for ""
	I0829 18:06:00.039767   20179 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 18:06:00.039784   20179 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 18:06:00.039861   20179 start.go:340] cluster config:
	{Name:download-only-684343 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-684343 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:06:00.041298   20179 out.go:97] Starting "download-only-684343" primary control-plane node in "download-only-684343" cluster
	I0829 18:06:00.041322   20179 cache.go:121] Beginning downloading kic base image for docker with docker
	I0829 18:06:00.042481   20179 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0829 18:06:00.042509   20179 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:00.042630   20179 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0829 18:06:00.059681   20179 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0829 18:06:00.059826   20179 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0829 18:06:00.059841   20179 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0829 18:06:00.059846   20179 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0829 18:06:00.059853   20179 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0829 18:06:00.571103   20179 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0829 18:06:00.571140   20179 cache.go:56] Caching tarball of preloaded images
	I0829 18:06:00.571313   20179 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 18:06:00.573400   20179 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0829 18:06:00.573427   20179 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0829 18:06:00.693691   20179 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /home/jenkins/minikube-integration/19531-12929/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-684343 host does not exist
	  To start a cluster, run: "minikube start -p download-only-684343"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-684343
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.97s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-168864 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-168864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-168864
--- PASS: TestDownloadOnlyKic (0.97s)

TestBinaryMirror (0.73s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-314184 --alsologtostderr --binary-mirror http://127.0.0.1:39569 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-314184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-314184
--- PASS: TestBinaryMirror (0.73s)
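
Note: --binary-mirror redirects the kubectl/kubeadm/kubelet downloads to the given address instead of dl.k8s.io; the 127.0.0.1:39569 endpoint here is a throwaway server the test harness stands up, so replaying this by hand assumes a mirror is already listening on that port:

	out/minikube-linux-amd64 start --download-only -p binary-mirror-314184 \
	  --alsologtostderr --binary-mirror http://127.0.0.1:39569 \
	  --driver=docker --container-runtime=docker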

TestOffline (80.8s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-549764 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-549764 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m18.409016758s)
helpers_test.go:175: Cleaning up "offline-docker-549764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-549764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-549764: (2.388876487s)
--- PASS: TestOffline (80.80s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-653578
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-653578: exit status 85 (46.289351ms)

-- stdout --
	* Profile "addons-653578" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-653578"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
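
Note: enabling an addon against a profile that has not been created is expected to fail with exit status 85 rather than do anything; reproducible with any profile name that has no cluster behind it (the name below is from this run):

	out/minikube-linux-amd64 addons enable dashboard -p addons-653578
	echo $?    # expected: 85, plus the "Profile ... not found" hint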

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-653578
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-653578: exit status 85 (47.164638ms)

-- stdout --
	* Profile "addons-653578" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-653578"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (212.32s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-653578 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-653578 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m32.322149327s)
--- PASS: TestAddons/Setup (212.32s)

TestAddons/serial/Volcano (39.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 9.994941ms
addons_test.go:897: volcano-scheduler stabilized in 10.282383ms
addons_test.go:905: volcano-admission stabilized in 10.352843ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-rxv6t" [97f850b1-df9e-4059-8c81-590fa9df386a] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0040928s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-l6wn7" [3dff5723-100b-47f0-82c5-207408dba148] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003567442s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-kk7h9" [b00561b9-2dc9-44b7-bf56-17161cc4eb12] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003501155s
addons_test.go:932: (dbg) Run:  kubectl --context addons-653578 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-653578 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-653578 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e7bbae5d-c3cb-4167-9e3a-145a1b93dec8] Pending
helpers_test.go:344: "test-job-nginx-0" [e7bbae5d-c3cb-4167-9e3a-145a1b93dec8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e7bbae5d-c3cb-4167-9e3a-145a1b93dec8] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.00377495s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable volcano --alsologtostderr -v=1: (10.364163876s)
--- PASS: TestAddons/serial/Volcano (39.73s)
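
Note: the helper above polls pods by their volcano.sh/job-name label until the nginx container reports Running; roughly the same wait can be expressed with stock kubectl (a sketch, not the helper's exact mechanism):

	kubectl --context addons-653578 get vcjob -n my-volcano
	kubectl --context addons-653578 wait --for=condition=Ready pod \
	  -l volcano.sh/job-name=test-job -n my-volcano --timeout=3m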

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-653578 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-653578 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (21.62s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-653578 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-653578 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-653578 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [127b9cd0-a5f4-487c-9918-2ec463f3aec6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [127b9cd0-a5f4-487c-9918-2ec463f3aec6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003461952s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-653578 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable ingress-dns --alsologtostderr -v=1: (1.601517121s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable ingress --alsologtostderr -v=1: (7.630390232s)
--- PASS: TestAddons/parallel/Ingress (21.62s)
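
Note: the two checks at the heart of this test can be replayed by hand: the in-cluster curl verifies that the ingress controller routes on the Host header, and the nslookup verifies that ingress-dns answers for the Ingress hostname at the node IP:

	out/minikube-linux-amd64 -p addons-653578 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-653578 ip)"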

TestAddons/parallel/InspektorGadget (10.58s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dw8km" [142b7327-2fbf-4f97-9e33-1532149366f7] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00370639s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-653578
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-653578: (5.575729623s)
--- PASS: TestAddons/parallel/InspektorGadget (10.58s)

TestAddons/parallel/MetricsServer (5.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.380125ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-hfkph" [98d84941-9a37-497f-92d4-aeb71bae507f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003797715s
addons_test.go:417: (dbg) Run:  kubectl --context addons-653578 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.56s)
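
Note: "kubectl top" only works once metrics-server is registered as an aggregated API; a quick sanity check alongside the command the test runs (the APIService name is the upstream default, not taken from this log):

	kubectl --context addons-653578 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-653578 top pods -n kube-system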

TestAddons/parallel/HelmTiller (11.79s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.187202ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-nzc4x" [97132718-1159-440c-9985-e5c297ed90f0] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.009316947s
addons_test.go:475: (dbg) Run:  kubectl --context addons-653578 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-653578 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.310611212s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.79s)

TestAddons/parallel/CSI (53.77s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.118042ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-653578 get pvc hpvc -o jsonpath={.status.phase} -n default
[the identical poll above ran 17 times in total while waiting for pvc "hpvc"]
addons_test.go:580: (dbg) Run:  kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [de13545e-3969-4fce-bc0d-0f9ac0e3a849] Pending
helpers_test.go:344: "task-pv-pod" [de13545e-3969-4fce-bc0d-0f9ac0e3a849] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [de13545e-3969-4fce-bc0d-0f9ac0e3a849] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003659215s
addons_test.go:590: (dbg) Run:  kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-653578 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-653578 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-653578 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-653578 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-653578 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
[... same command repeated 9 more times while polling the phase of pvc "hpvc-restore" ...]
addons_test.go:622: (dbg) Run:  kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4e13d04-cc01-4ad6-b3b9-65dde3ae6ee6] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4e13d04-cc01-4ad6-b3b9-65dde3ae6ee6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4e13d04-cc01-4ad6-b3b9-65dde3ae6ee6] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00395423s
addons_test.go:632: (dbg) Run:  kubectl --context addons-653578 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-653578 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-653578 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.475729774s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.77s)
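
For anyone replaying this flow outside the harness, the snapshot/restore sequence above condenses to the commands below (a minimal sketch of what the harness runs; the readiness polling between steps is omitted, and the testdata/ manifests live in the minikube source tree):

    kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # once the snapshot reports readyToUse, the original pod and claim can go away
    kubectl --context addons-653578 delete pod task-pv-pod
    kubectl --context addons-653578 delete pvc hpvc
    # restore a fresh claim from the snapshot and mount it in a new pod
    kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-653578 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml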

TestAddons/parallel/Headlamp (18.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-653578 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-hqkrj" [4e62d30f-ce80-44a7-983d-416423415c94] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-hqkrj" [4e62d30f-ce80-44a7-983d-416423415c94] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003658471s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable headlamp --alsologtostderr -v=1: (5.635229564s)
--- PASS: TestAddons/parallel/Headlamp (18.39s)
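
The wait loop above is the harness's own polling; by hand, roughly the same check can be done with kubectl wait (an equivalent stand-in, not what the test literally runs):

    out/minikube-linux-amd64 addons enable headlamp -p addons-653578 --alsologtostderr -v=1
    # block until the headlamp pod is Ready, mirroring the 8m0s wait above
    kubectl --context addons-653578 -n headlamp wait pod -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=480s
    out/minikube-linux-amd64 -p addons-653578 addons disable headlamp --alsologtostderr -v=1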

TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-9nw44" [9ab231c3-840b-4225-9f90-a4733344556c] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004167152s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-653578
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

TestAddons/parallel/LocalPath (54.88s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-653578 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-653578 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-653578 get pvc test-pvc -o jsonpath={.status.phase} -n default
[... same command repeated 6 more times while polling the phase of pvc "test-pvc" ...]
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [1478afd4-9ea9-4a46-8edf-c6042b1459b9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [1478afd4-9ea9-4a46-8edf-c6042b1459b9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [1478afd4-9ea9-4a46-8edf-c6042b1459b9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003711562s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-653578 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 ssh "cat /opt/local-path-provisioner/pvc-39f4fb16-19c9-473f-b1e5-a21f836e005c_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-653578 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-653578 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.015067766s)
--- PASS: TestAddons/parallel/LocalPath (54.88s)
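
The ssh step above is the interesting one: it proves the pod's write landed on the node's host filesystem. Condensed, the flow is (a sketch; the pvc-... directory name embeds the claim's generated UID and changes on every run, so <pvc-uid> below is a placeholder):

    kubectl --context addons-653578 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-653578 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # after the busybox pod completes, read its output back from the host path
    out/minikube-linux-amd64 -p addons-653578 ssh "cat /opt/local-path-provisioner/pvc-<pvc-uid>_default_test-pvc/file1"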

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7bg4q" [dd96e728-76df-48ee-ade4-d42404749188] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003795924s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-653578
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (11.78s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-brht4" [d3620758-f8c0-4a27-86ed-4f128bb9edb7] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003130075s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-653578 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-653578 addons disable yakd --alsologtostderr -v=1: (5.779569469s)
--- PASS: TestAddons/parallel/Yakd (11.78s)

TestAddons/StoppedEnableDisable (10.92s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-653578
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-653578: (10.695623825s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-653578
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-653578
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-653578
--- PASS: TestAddons/StoppedEnableDisable (10.92s)

TestCertOptions (31.91s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-784790 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-784790 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (26.958824335s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-784790 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0829 18:53:41.479432   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-784790 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-784790 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-784790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-784790
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-784790: (4.346909412s)
--- PASS: TestCertOptions (31.91s)
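
To eyeball the same thing by hand, the openssl call above can be piped through grep to show the subject alternative names baked into the apiserver certificate (a sketch; the grep filter is added here and is not part of the test):

    out/minikube-linux-amd64 -p cert-options-784790 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # expected to list 127.0.0.1, 192.168.15.15, localhost and www.google.com,
    # with the apiserver served on port 8555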

TestCertExpiration (225.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0829 18:53:11.584415   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (23.320661641s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.527220956s)
helpers_test.go:175: Cleaning up "cert-expiration-953144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-953144
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-953144: (2.128735148s)
--- PASS: TestCertExpiration (225.98s)
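
The 225.98s wall time is dominated by deliberately waiting out the 3-minute certificate window; the command sequence itself is short (condensed from the runs above):

    out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=docker
    # wait for the 3m window to lapse, then restart with a one-year expiry,
    # which forces minikube to regenerate the now-expired certificates
    out/minikube-linux-amd64 start -p cert-expiration-953144 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=docker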

TestDockerFlags (23.21s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-396166 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-396166 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (20.643305016s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-396166 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-396166 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-396166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-396166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-396166: (2.069379124s)
--- PASS: TestDockerFlags (23.21s)
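
The two ssh probes above are how the flags are verified: --docker-env values must surface in the systemd unit's Environment, and --docker-opt values in its ExecStart line. By hand (condensed from the run above):

    out/minikube-linux-amd64 start -p docker-flags-396166 --docker-env=FOO=BAR --docker-env=BAZ=BAT \
      --docker-opt=debug --docker-opt=icc=true --driver=docker --container-runtime=docker
    out/minikube-linux-amd64 -p docker-flags-396166 ssh "sudo systemctl show docker --property=Environment --no-pager"
    out/minikube-linux-amd64 -p docker-flags-396166 ssh "sudo systemctl show docker --property=ExecStart --no-pager"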

TestForceSystemdFlag (30.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-595407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-595407 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.369801779s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-595407 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-595407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-595407
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-595407: (2.207656743s)
--- PASS: TestForceSystemdFlag (30.91s)
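
Both this test and TestForceSystemdEnv below reduce to the same one-line check: after starting with systemd forced on, Docker inside the node must report systemd rather than cgroupfs as its cgroup driver. Condensed from the run above:

    out/minikube-linux-amd64 start -p force-systemd-flag-595407 --memory=2048 --force-systemd --driver=docker --container-runtime=docker
    # should print "systemd"
    out/minikube-linux-amd64 -p force-systemd-flag-595407 ssh "docker info --format {{.CgroupDriver}}"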

TestForceSystemdEnv (29.02s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-024979 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-024979 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.493544177s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-024979 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-024979" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-024979
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-024979: (3.141839079s)
--- PASS: TestForceSystemdEnv (29.02s)

TestKVMDriverInstallOrUpdate (5.74s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (5.74s)

TestErrorSpam/setup (24.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-466668 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-466668 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-466668 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-466668 --driver=docker  --container-runtime=docker: (24.120633679s)
--- PASS: TestErrorSpam/setup (24.12s)

TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.16s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 pause
--- PASS: TestErrorSpam/pause (1.16s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (10.82s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 stop: (10.642928578s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-466668 --log_dir /tmp/nospam-466668 stop
--- PASS: TestErrorSpam/stop (10.82s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19531-12929/.minikube/files/etc/test/nested/copy/19739/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.37s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-995951 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m5.365062017s)
--- PASS: TestFunctional/serial/StartWithProxy (65.37s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.83s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-995951 --alsologtostderr -v=8: (36.826714059s)
functional_test.go:663: soft start took 36.827406939s for "functional-995951" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.83s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-995951 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-995951 /tmp/TestFunctionalserialCacheCmdcacheadd_local1184687675/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache add minikube-local-cache-test:functional-995951
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-995951 cache add minikube-local-cache-test:functional-995951: (1.105070164s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache delete minikube-local-cache-test:functional-995951
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-995951
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.42s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (256.808796ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.20s)
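
The "Non-zero exit" above is the expected half of the round trip: the image is deleted out from under the runtime, confirmed missing, then restored from minikube's on-disk cache. Condensed from the commands above:

    out/minikube-linux-amd64 -p functional-995951 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
    out/minikube-linux-amd64 -p functional-995951 cache reload
    out/minikube-linux-amd64 -p functional-995951 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again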

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 kubectl -- --context functional-995951 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-995951 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (40.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-995951 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.477095569s)
functional_test.go:761: restart took 40.477327489s for "functional-995951" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.48s)
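
The restart here passes a component flag straight through to the apiserver; --extra-config takes component.key=value form. Condensed from the run above:

    out/minikube-linux-amd64 start -p functional-995951 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all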

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-995951 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
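
The phase/status pairs above come from the control-plane pods' JSON. One way to pull the same summary by hand is a jsonpath query (a sketch; the test itself parses the full -o=json output):

    kubectl --context functional-995951 get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.status.phase}{"\n"}{end}'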

TestFunctional/serial/LogsCmd (0.97s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 logs
--- PASS: TestFunctional/serial/LogsCmd (0.97s)

TestFunctional/serial/LogsFileCmd (0.96s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 logs --file /tmp/TestFunctionalserialLogsFileCmd4270224438/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.96s)

TestFunctional/serial/InvalidService (4.76s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-995951 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-995951
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-995951: exit status 115 (317.19376ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31633 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-995951 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-995951 delete -f testdata/invalidsvc.yaml: (1.275661595s)
--- PASS: TestFunctional/serial/InvalidService (4.76s)
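
Exit status 115 (SVC_UNREACHABLE) is the success condition here: the service exists but has no running pod behind it, and minikube is expected to say so instead of printing a dead URL. Condensed from the run above:

    kubectl --context functional-995951 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-995951   # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-995951 delete -f testdata/invalidsvc.yaml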

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 config get cpus: exit status 14 (75.14244ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 config get cpus: exit status 14 (47.602035ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
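
Exit status 14 is the expected "key not found" code for config get on an unset key; the full set/get/unset round trip above condenses to:

    out/minikube-linux-amd64 -p functional-995951 config get cpus     # exit 14: key not set
    out/minikube-linux-amd64 -p functional-995951 config set cpus 2
    out/minikube-linux-amd64 -p functional-995951 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-995951 config unset cpus
    out/minikube-linux-amd64 -p functional-995951 config get cpus     # exit 14 again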

TestFunctional/parallel/DashboardCmd (13.73s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-995951 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-995951 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 71827: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.73s)

TestFunctional/parallel/DryRun (0.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-995951 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (151.74756ms)

-- stdout --
	* [functional-995951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 18:23:14.822331   71291 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:23:14.822491   71291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:14.822504   71291 out.go:358] Setting ErrFile to fd 2...
	I0829 18:23:14.822511   71291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:14.822841   71291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:23:14.823582   71291 out.go:352] Setting JSON to false
	I0829 18:23:14.825047   71291 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3941,"bootTime":1724951854,"procs":341,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:23:14.825157   71291 start.go:139] virtualization: kvm guest
	I0829 18:23:14.827509   71291 out.go:177] * [functional-995951] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0829 18:23:14.829195   71291 notify.go:220] Checking for updates...
	I0829 18:23:14.829245   71291 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:23:14.830648   71291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:23:14.832115   71291 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:23:14.833485   71291 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	I0829 18:23:14.835272   71291 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:23:14.836736   71291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:23:14.838523   71291 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:23:14.839207   71291 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:23:14.863875   71291 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:23:14.863992   71291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:23:14.910951   71291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:23:14.89996941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:23:14.911100   71291 docker.go:307] overlay module found
	I0829 18:23:14.912811   71291 out.go:177] * Using the docker driver based on existing profile
	I0829 18:23:14.914015   71291 start.go:297] selected driver: docker
	I0829 18:23:14.914032   71291 start.go:901] validating driver "docker" against &{Name:functional-995951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-995951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:23:14.914124   71291 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:23:14.916141   71291 out.go:201] 
	W0829 18:23:14.917338   71291 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:23:14.918529   71291 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.34s)
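
Exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) is the point of the first run: --dry-run must validate the 250MB request against the 1800MB minimum and refuse without touching the existing profile. Condensed from the runs above:

    out/minikube-linux-amd64 start -p functional-995951 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
    # exits 23; a second dry run with the profile's existing settings exits 0
    out/minikube-linux-amd64 start -p functional-995951 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker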

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-995951 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-995951 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (282.143818ms)

-- stdout --
	* [functional-995951] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 18:23:14.533758   71126 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:23:14.533899   71126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:14.533912   71126 out.go:358] Setting ErrFile to fd 2...
	I0829 18:23:14.533919   71126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:23:14.534300   71126 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:23:14.535952   71126 out.go:352] Setting JSON to false
	I0829 18:23:14.537098   71126 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3941,"bootTime":1724951854,"procs":338,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0829 18:23:14.537183   71126 start.go:139] virtualization: kvm guest
	I0829 18:23:14.539463   71126 out.go:177] * [functional-995951] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0829 18:23:14.541383   71126 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 18:23:14.541430   71126 notify.go:220] Checking for updates...
	I0829 18:23:14.544906   71126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 18:23:14.546906   71126 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	I0829 18:23:14.550908   71126 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	I0829 18:23:14.583578   71126 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0829 18:23:14.591246   71126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 18:23:14.602778   71126 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:23:14.603468   71126 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 18:23:14.626626   71126 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0829 18:23:14.626722   71126 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:23:14.677323   71126 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-29 18:23:14.666135399 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:23:14.677418   71126 docker.go:307] overlay module found
	I0829 18:23:14.730478   71126 out.go:177] * Using the docker driver based on existing profile
	I0829 18:23:14.748465   71126 start.go:297] selected driver: docker
	I0829 18:23:14.748488   71126 start.go:901] validating driver "docker" against &{Name:functional-995951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-995951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 18:23:14.748624   71126 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 18:23:14.762973   71126 out.go:201] 
	W0829 18:23:14.764611   71126 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 18:23:14.767208   71126 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
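TestFunctional/parallel/InternationalLanguage runs minikube start under a French locale and asserts that the failure text is localized (rendered here in English); the 250MB request is deliberately below minikube's 1800MB floor, so validation fails before anything is mutated. A minimal sketch of the same invocation, assuming LC_ALL=fr is how the test selects the locale and that --dry-run keeps start from touching the existing cluster:

    # Assumed reproduction: undersized memory + French locale should yield the
    # localized RSRC_INSUFFICIENT_REQ_MEMORY error seen in the log above.
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-995951 \
      --dry-run --memory=250MB --alsologtostderr -v=1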

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)
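The Go template passed to -f above is reusable as-is: the labels before each colon are arbitrary (the test's `kublet` spelling is copied verbatim from its format string), while the {{.Field}} expressions select real status fields. The same commands from the log, quoted for interactive shell use:

    out/minikube-linux-amd64 -p functional-995951 status \
      -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-995951 status -o json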

TestFunctional/parallel/ServiceCmdConnect (8.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-995951 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-995951 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-287t6" [eeb27edb-c8ce-40c8-8a7a-e7f7c37c1c00] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-287t6" [eeb27edb-c8ce-40c8-8a7a-e7f7c37c1c00] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003738872s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31474
functional_test.go:1675: http://192.168.49.2:31474: success! body:

Hostname: hello-node-connect-67bdd5bbb4-287t6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31474
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.68s)
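The steps above amount to a complete NodePort round trip: create a deployment, expose it, resolve the node URL, fetch it. Replayed by hand (the curl at the end is our stand-in for the test's HTTP check; readiness waits omitted):

    kubectl --context functional-995951 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-995951 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-995951 service hello-node-connect --url)
    curl -s "$URL"    # prints an echoserver report like the one above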

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (42.85s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9734385c-fb73-4340-be7c-045a8af5f6d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00359231s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-995951 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-995951 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-995951 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-995951 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c37e646-6d61-47df-8c59-ebb3c24a1b3f] Pending
helpers_test.go:344: "sp-pod" [1c37e646-6d61-47df-8c59-ebb3c24a1b3f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c37e646-6d61-47df-8c59-ebb3c24a1b3f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003677871s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-995951 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-995951 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-995951 delete -f testdata/storage-provisioner/pod.yaml: (2.030345031s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-995951 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [58f3821b-bc06-4edd-b23d-f3a1641ed858] Pending
helpers_test.go:344: "sp-pod" [58f3821b-bc06-4edd-b23d-f3a1641ed858] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [58f3821b-bc06-4edd-b23d-f3a1641ed858] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004091942s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-995951 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.85s)
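The delete-and-recreate sequence is the actual assertion here: /tmp/mount in the pod is backed by the PVC, so a file written before the first pod is destroyed must still be visible to its replacement. Condensed, with the pod-readiness waits omitted for brevity:

    kubectl --context functional-995951 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-995951 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-995951 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-995951 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-995951 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-995951 exec sp-pod -- ls /tmp/mount    # foo persists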

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (1.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh -n functional-995951 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cp functional-995951:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4000396319/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh -n functional-995951 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh -n functional-995951 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)
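minikube cp works in both directions and creates missing directories on the target, which is what the three copies above exercise. In isolation (the /tmp destination below is an arbitrary choice of ours):

    # host -> node
    out/minikube-linux-amd64 -p functional-995951 cp testdata/cp-test.txt /home/docker/cp-test.txt
    # node -> host; a <profile>: prefix marks the in-cluster path
    out/minikube-linux-amd64 -p functional-995951 cp functional-995951:/home/docker/cp-test.txt /tmp/cp-test.txt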

TestFunctional/parallel/MySQL (24.06s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-995951 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-l9tfm" [f67857a7-32c6-4a80-9b06-b56b393f08d3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-l9tfm" [f67857a7-32c6-4a80-9b06-b56b393f08d3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.004284273s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- mysql -ppassword -e "show databases;": exit status 1 (107.620381ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- mysql -ppassword -e "show databases;": exit status 1 (104.960909ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.06s)
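The two ERROR 2002 exits above are expected noise: the pod reports Running before mysqld has created its socket, so the test simply retries until the query succeeds. A hand-rolled equivalent of that retry (the pod name is specific to this run and will differ):

    until kubectl --context functional-995951 exec mysql-6cdb49bbb-l9tfm -- \
        mysql -ppassword -e "show databases;" 2>/dev/null; do
      sleep 2    # wait for mysqld to start accepting socket connections
    done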

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/19739/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /etc/test/nested/copy/19739/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/19739.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /etc/ssl/certs/19739.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/19739.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /usr/share/ca-certificates/19739.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/197392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /etc/ssl/certs/197392.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/197392.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /usr/share/ca-certificates/197392.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)
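The hash-named files are OpenSSL subject-hash links: the test pairs /etc/ssl/certs/51391683.0 with 19739.pem and /etc/ssl/certs/3ec20f2e.0 with 197392.pem. Assuming standard OpenSSL tooling is available, the pairing can be confirmed by recomputing the hash:

    openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/19739.pem     # expect 51391683
    openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/197392.pem    # expect 3ec20f2e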

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-995951 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh "sudo systemctl is-active crio": exit status 1 (264.586293ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)
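systemctl is-active prints the unit state and exits non-zero for anything other than active (3 here, the conventional code for a stopped service), so the non-zero exit is the passing outcome: on a docker-runtime cluster, crio must be inactive. Standalone:

    out/minikube-linux-amd64 -p functional-995951 ssh "sudo systemctl is-active crio" \
      || echo "crio inactive, as expected with the docker runtime"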

TestFunctional/parallel/License (0.68s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.68s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-995951 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-995951 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-b9rfk" [2301a37f-cd18-476d-9cc3-b98adf230c81] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-b9rfk" [2301a37f-cd18-476d-9cc3-b98adf230c81] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004209614s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "307.075172ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "54.50951ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "369.676809ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.542528ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
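The timings above explain the --light flag: roughly 370ms for the full JSON listing versus 57ms with --light, which (by our reading of these timings and the flag's help text) skips probing each cluster's live status. Side by side:

    out/minikube-linux-amd64 profile list -o json            # probes each cluster
    out/minikube-linux-amd64 profile list -o json --light    # skips the probe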

TestFunctional/parallel/MountCmd/any-port (7.96s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdany-port1312917230/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955793085327375" to /tmp/TestFunctionalparallelMountCmdany-port1312917230/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955793085327375" to /tmp/TestFunctionalparallelMountCmdany-port1312917230/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955793085327375" to /tmp/TestFunctionalparallelMountCmdany-port1312917230/001/test-1724955793085327375
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (290.052354ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 test-1724955793085327375
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh cat /mount-9p/test-1724955793085327375
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-995951 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [813f26d8-ae9b-4599-9ea5-9c3b3cc9df0c] Pending
helpers_test.go:344: "busybox-mount" [813f26d8-ae9b-4599-9ea5-9c3b3cc9df0c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [813f26d8-ae9b-4599-9ea5-9c3b3cc9df0c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [813f26d8-ae9b-4599-9ea5-9c3b3cc9df0c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.002814225s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-995951 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdany-port1312917230/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.96s)
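The initial findmnt failure is an expected race: minikube mount runs as a background daemon, so the 9p filesystem appears a moment after the command is launched, and the test polls for it. The same pattern by hand, with the paths from this run:

    out/minikube-linux-amd64 mount -p functional-995951 \
      /tmp/TestFunctionalparallelMountCmdany-port1312917230/001:/mount-9p &
    # poll until the 9p mount is visible inside the node
    until out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p"; do
      sleep 1
    done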

TestFunctional/parallel/MountCmd/specific-port (1.85s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdspecific-port1952151639/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (278.294842ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdspecific-port1952151639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh "sudo umount -f /mount-9p": exit status 1 (277.138427ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-995951 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdspecific-port1952151639/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.85s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service list -o json
functional_test.go:1494: Took "507.272445ms" to run "out/minikube-linux-amd64 -p functional-995951 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30209
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T" /mount1: exit status 1 (374.645559ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-995951 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-995951 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3097462999/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)
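The cleanup being verified is mount --kill=true, which terminates every background mount process for the profile at once instead of unmounting /mount1, /mount2, and /mount3 individually:

    out/minikube-linux-amd64 mount -p functional-995951 --kill=true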

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30209
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.41s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-995951 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-995951
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-995951
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-995951 image ls --format short --alsologtostderr:
I0829 18:23:37.910282   77969 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:37.910401   77969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:37.910409   77969 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:37.910413   77969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:37.910599   77969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
I0829 18:23:37.911112   77969 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:37.911202   77969 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:37.911553   77969 cli_runner.go:164] Run: docker container inspect functional-995951 --format={{.State.Status}}
I0829 18:23:37.938139   77969 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:37.938206   77969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995951
I0829 18:23:37.958822   77969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/functional-995951/id_rsa Username:docker}
I0829 18:23:38.119014   77969 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
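The four ImageList subtests differ only in the --format value; each one, per its stderr above, shells into the node, runs docker images --no-trunc --format "{{json .}}", and renders the result in the requested shape:

    out/minikube-linux-amd64 -p functional-995951 image ls --format short
    out/minikube-linux-amd64 -p functional-995951 image ls --format table
    out/minikube-linux-amd64 -p functional-995951 image ls --format json
    out/minikube-linux-amd64 -p functional-995951 image ls --format yaml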

TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-995951 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-995951 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-995951 | b2260b5bdacf1 | 30B    |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-995951 image ls --format table --alsologtostderr:
I0829 18:23:42.009346   78561 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:42.009462   78561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:42.009471   78561 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:42.009475   78561 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:42.009646   78561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
I0829 18:23:42.010209   78561 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:42.010306   78561 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:42.010666   78561 cli_runner.go:164] Run: docker container inspect functional-995951 --format={{.State.Status}}
I0829 18:23:42.031701   78561 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:42.031762   78561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995951
I0829 18:23:42.057743   78561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/functional-995951/id_rsa Username:docker}
I0829 18:23:42.418645   78561 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.49s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-995951 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"b2260b5bdacf14edd84be59b5b06937b07ca2e21f2fef2269dff289e4f72dc2b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-995951"],"size":"30"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-995951"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-995951 image ls --format json --alsologtostderr:
I0829 18:23:41.801928   78501 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:41.802035   78501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:41.802043   78501 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:41.802048   78501 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:41.802242   78501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
I0829 18:23:41.802784   78501 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:41.802898   78501 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:41.803224   78501 cli_runner.go:164] Run: docker container inspect functional-995951 --format={{.State.Status}}
I0829 18:23:41.822436   78501 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:41.822492   78501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995951
I0829 18:23:41.843350   78501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/functional-995951/id_rsa Username:docker}
I0829 18:23:41.934700   78501 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-995951 image ls --format yaml --alsologtostderr:
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: b2260b5bdacf14edd84be59b5b06937b07ca2e21f2fef2269dff289e4f72dc2b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-995951
size: "30"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-995951
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-995951 image ls --format yaml --alsologtostderr:
I0829 18:23:38.194122   78033 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:38.194393   78033 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:38.194403   78033 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:38.194407   78033 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:38.195046   78033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
I0829 18:23:38.196302   78033 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:38.196417   78033 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:38.196773   78033 cli_runner.go:164] Run: docker container inspect functional-995951 --format={{.State.Status}}
I0829 18:23:38.214102   78033 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:38.214162   78033 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995951
I0829 18:23:38.234871   78033 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/functional-995951/id_rsa Username:docker}
I0829 18:23:38.365937   78033 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-995951 ssh pgrep buildkitd: exit status 1 (244.315225ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image build -t localhost/my-image:functional-995951 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-995951 image build -t localhost/my-image:functional-995951 testdata/build --alsologtostderr: (4.1971201s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-995951 image build -t localhost/my-image:functional-995951 testdata/build --alsologtostderr:
I0829 18:23:38.738760   78167 out.go:345] Setting OutFile to fd 1 ...
I0829 18:23:38.738921   78167 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:38.738932   78167 out.go:358] Setting ErrFile to fd 2...
I0829 18:23:38.738938   78167 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 18:23:38.739173   78167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
I0829 18:23:38.739725   78167 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:38.740380   78167 config.go:182] Loaded profile config "functional-995951": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 18:23:38.740810   78167 cli_runner.go:164] Run: docker container inspect functional-995951 --format={{.State.Status}}
I0829 18:23:38.760326   78167 ssh_runner.go:195] Run: systemctl --version
I0829 18:23:38.760404   78167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-995951
I0829 18:23:38.778993   78167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/functional-995951/id_rsa Username:docker}
I0829 18:23:38.874334   78167 build_images.go:161] Building image from path: /tmp/build.4147144573.tar
I0829 18:23:38.874420   78167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 18:23:38.883998   78167 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4147144573.tar
I0829 18:23:38.887451   78167 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4147144573.tar: stat -c "%s %y" /var/lib/minikube/build/build.4147144573.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4147144573.tar': No such file or directory
I0829 18:23:38.887476   78167 ssh_runner.go:362] scp /tmp/build.4147144573.tar --> /var/lib/minikube/build/build.4147144573.tar (3072 bytes)
I0829 18:23:38.924356   78167 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4147144573
I0829 18:23:38.934886   78167 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4147144573 -xf /var/lib/minikube/build/build.4147144573.tar
I0829 18:23:38.944512   78167 docker.go:360] Building image: /var/lib/minikube/build/build.4147144573
I0829 18:23:38.944607   78167 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-995951 /var/lib/minikube/build/build.4147144573
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 1.3s
#6 [2/3] RUN true
#6 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:dbf2f8e3c7a2f00b89f8a0f517bb3af5601e81a43525ab87acec6e594fb9ae81 done
#8 naming to localhost/my-image:functional-995951 done
#8 DONE 0.0s
I0829 18:23:42.861277   78167 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-995951 /var/lib/minikube/build/build.4147144573: (3.916638019s)
I0829 18:23:42.861358   78167 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4147144573
I0829 18:23:42.870742   78167 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4147144573.tar
I0829 18:23:42.880073   78167 build_images.go:217] Built localhost/my-image:functional-995951 from /tmp/build.4147144573.tar
I0829 18:23:42.880109   78167 build_images.go:133] succeeded building to: functional-995951
I0829 18:23:42.880116   78167 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.65s)
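The BuildKit steps above (load a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) imply a build context along these lines. This is a reconstruction from the log, not necessarily the literal testdata/build contents; the content.txt payload in particular is a placeholder.

	mkdir -p testdata/build && cd testdata/build
	echo hello > content.txt      # placeholder; the real 62-byte context file is not shown in the log
	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-995951 image build -t localhost/my-image:functional-995951 .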

TestFunctional/parallel/ImageCommands/Setup (2.01s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.993001793s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-995951
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.01s)

TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-995951 docker-env) && out/minikube-linux-amd64 status -p functional-995951"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-995951 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)
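What this test exercises is the standard docker-env workflow: point the host's docker CLI at the daemon inside the minikube node, run docker commands there, then revert. A minimal sketch of the same pattern:

	eval $(out/minikube-linux-amd64 -p functional-995951 docker-env)
	docker images    # now lists images from the cluster's Docker daemon
	eval $(out/minikube-linux-amd64 -p functional-995951 docker-env --unset)    # revert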

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image load --daemon kicbase/echo-server:functional-995951 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
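All three UpdateContextCmd subtests drive the same command, which rewrites the kubeconfig entry for the profile to match the cluster's current IP. A minimal reproduction; the kubectl check is an illustrative extra, not part of the test:

	out/minikube-linux-amd64 -p functional-995951 update-context
	kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-995951")].cluster.server}'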

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image load --daemon kicbase/echo-server:functional-995951 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
2024/08/29 18:23:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-995951
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image load --daemon kicbase/echo-server:functional-995951 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 76006: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-995951 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3e4e1183-d838-4acb-b9c1-6b7da8ce7666] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3e4e1183-d838-4acb-b9c1-6b7da8ce7666] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.00387105s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image save kicbase/echo-server:functional-995951 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.48s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image rm kicbase/echo-server:functional-995951 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-995951
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-995951 image save --daemon kicbase/echo-server:functional-995951 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-995951
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
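Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon tests form one save/load round trip. The same flow, condensed into a sketch (image name and tarball path are the ones used in this run):

	IMG=kicbase/echo-server:functional-995951
	TAR=/home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar
	out/minikube-linux-amd64 -p functional-995951 image save "$IMG" "$TAR"      # cluster -> tarball
	out/minikube-linux-amd64 -p functional-995951 image rm "$IMG"               # remove from cluster
	out/minikube-linux-amd64 -p functional-995951 image load "$TAR"             # tarball -> cluster
	out/minikube-linux-amd64 -p functional-995951 image save --daemon "$IMG"    # cluster -> host daemon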

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-995951 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.21.108 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-995951 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
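That closes the tunnel lifecycle covered by the serial group above: start the tunnel, expose a LoadBalancer Service, read its ingress IP, hit it, tear the tunnel down. Reproduced as a shell sketch; the curl probe is illustrative, since the test uses its own HTTP client:

	out/minikube-linux-amd64 -p functional-995951 tunnel &    # keep running in the background
	kubectl --context functional-995951 apply -f testdata/testsvc.yaml
	IP=$(kubectl --context functional-995951 get svc nginx-svc \
	  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
	curl -sS "http://$IP"    # 10.105.21.108 in this run
	kill %1                  # delete the tunnel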

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-995951
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-995951
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-995951
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (103.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-774784 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:24:46.831446   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.838551   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.849979   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.871446   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.912884   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:46.994346   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:47.155841   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:47.477488   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:48.118955   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:49.400478   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:51.961872   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:24:57.084129   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:25:07.325819   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:25:27.807802   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-774784 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m43.277447795s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (103.94s)
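The --ha flag is what makes this a multi-control-plane cluster: the status output later in this report shows three control-plane nodes (ha-774784, -m02, -m03); the worker -m04 is added by a later test. To start an equivalent cluster by hand:

	out/minikube-linux-amd64 start -p ha-774784 --ha --wait=true --memory=2200 \
	  --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr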

TestMultiControlPlane/serial/DeployApp (5.6s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-774784 -- rollout status deployment/busybox: (3.774355507s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-8l4zv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-rt72v -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-stzct -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-8l4zv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-rt72v -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-stzct -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-8l4zv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-rt72v -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-stzct -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.60s)

TestMultiControlPlane/serial/PingHostFromPods (1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-8l4zv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-8l4zv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-rt72v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-rt72v -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-stzct -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-774784 -- exec busybox-7dff88458-stzct -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.00s)

TestMultiControlPlane/serial/AddWorkerNode (20.05s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-774784 -v=7 --alsologtostderr
E0829 18:26:08.769573   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-774784 -v=7 --alsologtostderr: (19.244504177s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.05s)
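node add joins a new machine as a worker by default, and the follow-up status call confirms the node came up. Minimal form (a --control-plane variant for joining as another control plane exists in recent minikube releases; treat that as an assumption about this version):

	out/minikube-linux-amd64 node add -p ha-774784    # joins as a worker
	out/minikube-linux-amd64 -p ha-774784 status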

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-774784 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.64s)

TestMultiControlPlane/serial/CopyFile (15.19s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp testdata/cp-test.txt ha-774784:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2721636001/001/cp-test_ha-774784.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784:/home/docker/cp-test.txt ha-774784-m02:/home/docker/cp-test_ha-774784_ha-774784-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test_ha-774784_ha-774784-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784:/home/docker/cp-test.txt ha-774784-m03:/home/docker/cp-test_ha-774784_ha-774784-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test_ha-774784_ha-774784-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784:/home/docker/cp-test.txt ha-774784-m04:/home/docker/cp-test_ha-774784_ha-774784-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test_ha-774784_ha-774784-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp testdata/cp-test.txt ha-774784-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2721636001/001/cp-test_ha-774784-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt ha-774784:/home/docker/cp-test_ha-774784-m02_ha-774784.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test_ha-774784-m02_ha-774784.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt ha-774784-m03:/home/docker/cp-test_ha-774784-m02_ha-774784-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test_ha-774784-m02_ha-774784-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt ha-774784-m04:/home/docker/cp-test_ha-774784-m02_ha-774784-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test_ha-774784-m02_ha-774784-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp testdata/cp-test.txt ha-774784-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2721636001/001/cp-test_ha-774784-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m03:/home/docker/cp-test.txt ha-774784:/home/docker/cp-test_ha-774784-m03_ha-774784.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test_ha-774784-m03_ha-774784.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m03:/home/docker/cp-test.txt ha-774784-m02:/home/docker/cp-test_ha-774784-m03_ha-774784-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test_ha-774784-m03_ha-774784-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m03:/home/docker/cp-test.txt ha-774784-m04:/home/docker/cp-test_ha-774784-m03_ha-774784-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test_ha-774784-m03_ha-774784-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp testdata/cp-test.txt ha-774784-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2721636001/001/cp-test_ha-774784-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m04:/home/docker/cp-test.txt ha-774784:/home/docker/cp-test_ha-774784-m04_ha-774784.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784 "sudo cat /home/docker/cp-test_ha-774784-m04_ha-774784.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m04:/home/docker/cp-test.txt ha-774784-m02:/home/docker/cp-test_ha-774784-m04_ha-774784-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m02 "sudo cat /home/docker/cp-test_ha-774784-m04_ha-774784-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m04:/home/docker/cp-test.txt ha-774784-m03:/home/docker/cp-test_ha-774784-m04_ha-774784-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test_ha-774784-m04_ha-774784-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.19s)
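The matrix above exercises every node pairing of minikube cp's three forms. One round of each, with the ssh -n verification the test uses (/tmp/cp-test.txt is an illustrative local path):

	out/minikube-linux-amd64 -p ha-774784 cp testdata/cp-test.txt ha-774784-m02:/home/docker/cp-test.txt    # local -> node
	out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt /tmp/cp-test.txt        # node -> local
	out/minikube-linux-amd64 -p ha-774784 cp ha-774784-m02:/home/docker/cp-test.txt \
	  ha-774784-m03:/home/docker/cp-test.txt                                                                # node -> node
	out/minikube-linux-amd64 -p ha-774784 ssh -n ha-774784-m03 "sudo cat /home/docker/cp-test.txt"          # verify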

TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-774784 node stop m02 -v=7 --alsologtostderr: (10.775811652s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr: exit status 7 (639.045195ms)
-- stdout --
	ha-774784
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-774784-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-774784-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-774784-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0829 18:26:36.900882  106278 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:26:36.901157  106278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:36.901167  106278 out.go:358] Setting ErrFile to fd 2...
	I0829 18:26:36.901172  106278 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:26:36.901356  106278 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:26:36.901532  106278 out.go:352] Setting JSON to false
	I0829 18:26:36.901556  106278 mustload.go:65] Loading cluster: ha-774784
	I0829 18:26:36.901678  106278 notify.go:220] Checking for updates...
	I0829 18:26:36.902141  106278 config.go:182] Loaded profile config "ha-774784": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:26:36.902171  106278 status.go:255] checking status of ha-774784 ...
	I0829 18:26:36.902640  106278 cli_runner.go:164] Run: docker container inspect ha-774784 --format={{.State.Status}}
	I0829 18:26:36.919740  106278 status.go:330] ha-774784 host status = "Running" (err=<nil>)
	I0829 18:26:36.919773  106278 host.go:66] Checking if "ha-774784" exists ...
	I0829 18:26:36.920089  106278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-774784
	I0829 18:26:36.940934  106278 host.go:66] Checking if "ha-774784" exists ...
	I0829 18:26:36.941253  106278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:26:36.941298  106278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-774784
	I0829 18:26:36.958746  106278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/ha-774784/id_rsa Username:docker}
	I0829 18:26:37.050546  106278 ssh_runner.go:195] Run: systemctl --version
	I0829 18:26:37.054520  106278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:26:37.064820  106278 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:26:37.113183  106278 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-08-29 18:26:37.103423762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:26:37.113785  106278 kubeconfig.go:125] found "ha-774784" server: "https://192.168.49.254:8443"
	I0829 18:26:37.113819  106278 api_server.go:166] Checking apiserver status ...
	I0829 18:26:37.113861  106278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:26:37.124547  106278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2328/cgroup
	I0829 18:26:37.133131  106278 api_server.go:182] apiserver freezer: "13:freezer:/docker/9e11c55ba9b1401d9336b1fc61e3a7b86ad6e3a94641addb92c246f1aec1682c/kubepods/burstable/pod1c121637dab0fabcc7b9a99fc2951547/b9d088da51b54cece528340dcd4c6f5a8d946ca75cc6b0d08086095f00b9d97c"
	I0829 18:26:37.133209  106278 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9e11c55ba9b1401d9336b1fc61e3a7b86ad6e3a94641addb92c246f1aec1682c/kubepods/burstable/pod1c121637dab0fabcc7b9a99fc2951547/b9d088da51b54cece528340dcd4c6f5a8d946ca75cc6b0d08086095f00b9d97c/freezer.state
	I0829 18:26:37.140892  106278 api_server.go:204] freezer state: "THAWED"
	I0829 18:26:37.140917  106278 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:26:37.144489  106278 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:26:37.144510  106278 status.go:422] ha-774784 apiserver status = Running (err=<nil>)
	I0829 18:26:37.144520  106278 status.go:257] ha-774784 status: &{Name:ha-774784 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:26:37.144538  106278 status.go:255] checking status of ha-774784-m02 ...
	I0829 18:26:37.144775  106278 cli_runner.go:164] Run: docker container inspect ha-774784-m02 --format={{.State.Status}}
	I0829 18:26:37.161160  106278 status.go:330] ha-774784-m02 host status = "Stopped" (err=<nil>)
	I0829 18:26:37.161181  106278 status.go:343] host is not running, skipping remaining checks
	I0829 18:26:37.161187  106278 status.go:257] ha-774784-m02 status: &{Name:ha-774784-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:26:37.161210  106278 status.go:255] checking status of ha-774784-m03 ...
	I0829 18:26:37.161469  106278 cli_runner.go:164] Run: docker container inspect ha-774784-m03 --format={{.State.Status}}
	I0829 18:26:37.178713  106278 status.go:330] ha-774784-m03 host status = "Running" (err=<nil>)
	I0829 18:26:37.178736  106278 host.go:66] Checking if "ha-774784-m03" exists ...
	I0829 18:26:37.178983  106278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-774784-m03
	I0829 18:26:37.195896  106278 host.go:66] Checking if "ha-774784-m03" exists ...
	I0829 18:26:37.196323  106278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:26:37.196364  106278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-774784-m03
	I0829 18:26:37.212514  106278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/ha-774784-m03/id_rsa Username:docker}
	I0829 18:26:37.302393  106278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:26:37.312728  106278 kubeconfig.go:125] found "ha-774784" server: "https://192.168.49.254:8443"
	I0829 18:26:37.312754  106278 api_server.go:166] Checking apiserver status ...
	I0829 18:26:37.312791  106278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:26:37.322996  106278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2238/cgroup
	I0829 18:26:37.331184  106278 api_server.go:182] apiserver freezer: "13:freezer:/docker/211d4a6d144fd45006ecfcb016791dbf9aed04ef42fc71965730d18bfb591f96/kubepods/burstable/pod6d6762542ff67b9d8913f1b0f95d2696/77c99d3ebc2e421331409225269f30c7f4fa4edb04fee2443f8c83c08cc3d8aa"
	I0829 18:26:37.331240  106278 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/211d4a6d144fd45006ecfcb016791dbf9aed04ef42fc71965730d18bfb591f96/kubepods/burstable/pod6d6762542ff67b9d8913f1b0f95d2696/77c99d3ebc2e421331409225269f30c7f4fa4edb04fee2443f8c83c08cc3d8aa/freezer.state
	I0829 18:26:37.338729  106278 api_server.go:204] freezer state: "THAWED"
	I0829 18:26:37.338752  106278 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0829 18:26:37.342236  106278 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0829 18:26:37.342255  106278 status.go:422] ha-774784-m03 apiserver status = Running (err=<nil>)
	I0829 18:26:37.342263  106278 status.go:257] ha-774784-m03 status: &{Name:ha-774784-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:26:37.342284  106278 status.go:255] checking status of ha-774784-m04 ...
	I0829 18:26:37.342493  106278 cli_runner.go:164] Run: docker container inspect ha-774784-m04 --format={{.State.Status}}
	I0829 18:26:37.360666  106278 status.go:330] ha-774784-m04 host status = "Running" (err=<nil>)
	I0829 18:26:37.360686  106278 host.go:66] Checking if "ha-774784-m04" exists ...
	I0829 18:26:37.360975  106278 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-774784-m04
	I0829 18:26:37.378964  106278 host.go:66] Checking if "ha-774784-m04" exists ...
	I0829 18:26:37.379277  106278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:26:37.379333  106278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-774784-m04
	I0829 18:26:37.397860  106278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/ha-774784-m04/id_rsa Username:docker}
	I0829 18:26:37.486578  106278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:26:37.496636  106278 status.go:257] ha-774784-m04 status: &{Name:ha-774784-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)
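The exit status 7 above is the point of this test: minikube status returns non-zero while any node is down, and the stdout block shows exactly which one. The stop/inspect/recover cycle as a sketch:

	out/minikube-linux-amd64 -p ha-774784 node stop m02
	out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr    # exits 7 in the run above
	out/minikube-linux-amd64 -p ha-774784 node start m02                   # recovery, covered two tests later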

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.47s)

TestMultiControlPlane/serial/RestartSecondaryNode (25.32s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-774784 node start m02 -v=7 --alsologtostderr: (24.175184283s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr: (1.061547492s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (25.32s)
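
Outside the harness, the restart can be confirmed directly against the API server; the node name follows minikube's <profile>-m02 convention seen elsewhere in this log:

    # restart the secondary control-plane node, then wait for it to report Ready
    out/minikube-linux-amd64 -p ha-774784 node start m02
    kubectl wait --for=condition=Ready node/ha-774784-m02 --timeout=120s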

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.41s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.409779983s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (2.41s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-774784 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-774784 -v=7 --alsologtostderr
E0829 18:27:30.691750   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-774784 -v=7 --alsologtostderr: (33.585225921s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-774784 --wait=true -v=7 --alsologtostderr
E0829 18:28:11.585236   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.591631   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.602999   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.624356   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.665785   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.747215   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:11.908768   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:12.230481   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:12.872508   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:14.153870   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:16.715822   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:21.837758   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:32.079068   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:28:52.560932   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:33.522428   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:29:46.831302   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:30:14.534055   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-774784 --wait=true -v=7 --alsologtostderr: (2m53.803325845s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-774784
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (207.48s)
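
The property under test is that a full stop/start cycle preserves the node inventory. A by-hand sketch of the same check (the /tmp scratch path is arbitrary):

    # capture the node list, bounce the whole cluster, and diff the lists
    out/minikube-linux-amd64 node list -p ha-774784 > /tmp/nodes.before
    out/minikube-linux-amd64 stop -p ha-774784
    out/minikube-linux-amd64 start -p ha-774784 --wait=true
    out/minikube-linux-amd64 node list -p ha-774784 | diff /tmp/nodes.before -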

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-774784 node delete m03 -v=7 --alsologtostderr: (8.511559216s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.24s)
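
The go-template in the last step doubles as a standalone readiness check: it prints one Ready condition status per node, so after deleting m03 it should emit exactly three True lines here:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'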

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.45s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.51s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 stop -v=7 --alsologtostderr
E0829 18:30:55.444131   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-774784 stop -v=7 --alsologtostderr: (32.412546785s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr: exit status 7 (96.184019ms)
-- stdout --
	ha-774784
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-774784-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-774784-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0829 18:31:15.311982  135713 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:31:15.312203  135713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:31:15.312211  135713 out.go:358] Setting ErrFile to fd 2...
	I0829 18:31:15.312215  135713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:31:15.312384  135713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:31:15.312537  135713 out.go:352] Setting JSON to false
	I0829 18:31:15.312560  135713 mustload.go:65] Loading cluster: ha-774784
	I0829 18:31:15.312689  135713 notify.go:220] Checking for updates...
	I0829 18:31:15.312894  135713 config.go:182] Loaded profile config "ha-774784": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:31:15.312909  135713 status.go:255] checking status of ha-774784 ...
	I0829 18:31:15.313274  135713 cli_runner.go:164] Run: docker container inspect ha-774784 --format={{.State.Status}}
	I0829 18:31:15.333009  135713 status.go:330] ha-774784 host status = "Stopped" (err=<nil>)
	I0829 18:31:15.333033  135713 status.go:343] host is not running, skipping remaining checks
	I0829 18:31:15.333043  135713 status.go:257] ha-774784 status: &{Name:ha-774784 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:31:15.333072  135713 status.go:255] checking status of ha-774784-m02 ...
	I0829 18:31:15.333403  135713 cli_runner.go:164] Run: docker container inspect ha-774784-m02 --format={{.State.Status}}
	I0829 18:31:15.350478  135713 status.go:330] ha-774784-m02 host status = "Stopped" (err=<nil>)
	I0829 18:31:15.350503  135713 status.go:343] host is not running, skipping remaining checks
	I0829 18:31:15.350512  135713 status.go:257] ha-774784-m02 status: &{Name:ha-774784-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:31:15.350532  135713 status.go:255] checking status of ha-774784-m04 ...
	I0829 18:31:15.350785  135713 cli_runner.go:164] Run: docker container inspect ha-774784-m04 --format={{.State.Status}}
	I0829 18:31:15.367628  135713 status.go:330] ha-774784-m04 host status = "Stopped" (err=<nil>)
	I0829 18:31:15.367648  135713 status.go:343] host is not running, skipping remaining checks
	I0829 18:31:15.367655  135713 status.go:257] ha-774784-m04 status: &{Name:ha-774784-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.51s)
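
The exit status 7 above is meaningful on its own: minikube status encodes the host, cluster and Kubernetes state bitwise in its exit code (per minikube status --help, 7 means all three are not ok), so scripts can branch on it without parsing the table:

    # 0 = everything running; non-zero encodes which components are down
    out/minikube-linux-amd64 -p ha-774784 status || echo "status exit code: $?"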

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (84.64s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-774784 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-774784 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m23.899582807s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (84.64s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.45s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (37.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-774784 --control-plane -v=7 --alsologtostderr
E0829 18:33:11.584093   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-774784 --control-plane -v=7 --alsologtostderr: (36.543490128s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-774784 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.34s)
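
Adding a control-plane member is the same invocation outside the harness; a quick follow-up check that the new node joined with the control-plane role (the standard kubeadm node label):

    out/minikube-linux-amd64 node add -p ha-774784 --control-plane
    kubectl get nodes -l node-role.kubernetes.io/control-plane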

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestImageBuild/serial/Setup (23.59s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-200971 --driver=docker  --container-runtime=docker
E0829 18:33:39.286212   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-200971 --driver=docker  --container-runtime=docker: (23.589341952s)
--- PASS: TestImageBuild/serial/Setup (23.59s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.48s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-200971
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-200971: (2.484049857s)
--- PASS: TestImageBuild/serial/NormalBuild (2.48s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-200971
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.97s)
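
--build-opt=build-arg=... is forwarded to the image build, so it only has an effect when the Dockerfile declares the ARG. A self-contained sketch with a hypothetical build context (the real testdata/image-build/test-arg ships in the minikube repo):

    # a throwaway Dockerfile that actually consumes the arg
    mkdir -p /tmp/test-arg
    printf 'FROM busybox\nARG ENV_A\nRUN echo "built with ENV_A=${ENV_A}"\n' > /tmp/test-arg/Dockerfile
    out/minikube-linux-amd64 -p image-200971 image build -t arg-demo:latest --build-opt=build-arg=ENV_A=test_env_str /tmp/test-arg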

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.7s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-200971
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.70s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.95s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-200971
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.95s)

                                                
                                    
TestJSONOutput/start/Command (34.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-348805 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-348805 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (34.785690015s)
--- PASS: TestJSONOutput/start/Command (34.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-348805 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-348805 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.62s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-348805 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-348805 --output=json --user=testUser: (5.623854109s)
--- PASS: TestJSONOutput/stop/Command (5.62s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-921476 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-921476 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.339183ms)
-- stdout --
	{"specversion":"1.0","id":"6ccd6f6f-d8fc-4c26-a217-17510e5de3e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-921476] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0764fad3-236f-44fc-8ab0-5a65567c31af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"48c27cc3-a8c0-478b-816d-997b8a32ba2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"894bdf76-12e0-4892-a022-cda4badb6d99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig"}}
	{"specversion":"1.0","id":"1573ca02-5af7-4c5d-afda-3d6573c406f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube"}}
	{"specversion":"1.0","id":"bd2f84f1-5a25-4164-972d-306ec57abc4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e5e81f71-f0e1-44fb-a55b-08d793a047bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4a00afd0-1b6b-4598-a266-4b7324f3a0f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-921476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-921476
--- PASS: TestErrorJSONOutput (0.19s)
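
Every line emitted under --output=json is a CloudEvent, and error events carry the exit code and error name in .data, as the stdout block above shows. A jq sketch for fishing errors out of the stream (jq assumed installed; profile name hypothetical):

    out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # clean up the leftover profile afterwards
    out/minikube-linux-amd64 delete -p json-demo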

                                                
                                    
TestKicCustomNetwork/create_custom_network (26.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-869029 --network=
E0829 18:34:46.832137   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-869029 --network=: (24.82235061s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-869029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-869029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-869029: (1.95244978s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.79s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.05s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-544012 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-544012 --network=bridge: (21.146222272s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-544012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-544012
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-544012: (1.885273074s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.05s)

                                                
                                    
TestKicExistingNetwork (22.29s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-818242 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-818242 --network=existing-network: (20.261989026s)
helpers_test.go:175: Cleaning up "existing-network-818242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-818242
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-818242: (1.892367962s)
--- PASS: TestKicExistingNetwork (22.29s)
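
TestKicExistingNetwork pre-creates the docker network before pointing minikube at it. The equivalent by hand (subnet chosen arbitrarily; profile name hypothetical):

    docker network create existing-network --subnet=192.168.70.0/24
    out/minikube-linux-amd64 start -p existing-net-demo --network=existing-network
    docker network ls --format {{.Name}}   # the pre-made network is reused, not recreated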

                                                
                                    
TestKicCustomSubnet (23.29s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-822454 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-822454 --subnet=192.168.60.0/24: (21.209112161s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-822454 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-822454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-822454
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-822454: (2.058712806s)
--- PASS: TestKicCustomSubnet (23.29s)

                                                
                                    
TestKicStaticIP (22.79s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-494031 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-494031 --static-ip=192.168.200.200: (20.704711942s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-494031 ip
helpers_test.go:175: Cleaning up "static-ip-494031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-494031
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-494031: (1.964836482s)
--- PASS: TestKicStaticIP (22.79s)
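
The static-IP check is just the start flag plus minikube ip (profile name hypothetical; the address must sit in a private range minikube accepts):

    out/minikube-linux-amd64 start -p static-demo --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-demo ip   # expected to print 192.168.200.200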

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (50.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-189237 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-189237 --driver=docker  --container-runtime=docker: (23.571622573s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-191716 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-191716 --driver=docker  --container-runtime=docker: (21.39809232s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-189237
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-191716
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-191716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-191716
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-191716: (2.002687843s)
helpers_test.go:175: Cleaning up "first-189237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-189237
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-189237: (2.056509671s)
--- PASS: TestMinikubeProfile (50.03s)
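
minikube's profile subcommand switches the active profile for subsequent commands, which is what the two profile invocations above exercise. By hand (that the bare subcommand prints the active name is the assumption here):

    out/minikube-linux-amd64 profile first-189237   # make first-189237 the active profile
    out/minikube-linux-amd64 profile                # print the currently active profile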

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.25s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-442860 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-442860 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.248076162s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.25s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-442860 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
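
The verification is an ls over the host mount; the options requested at start time (uid, gid, msize, port) can also be inspected from inside the node. A sketch, assuming the mount shows up as a 9p filesystem on /minikube-host:

    out/minikube-linux-amd64 -p mount-start-1-442860 ssh -- "mount | grep /minikube-host"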

                                                
                                    
TestMountStart/serial/StartWithMountSecond (10.31s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453638 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.312354523s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.31s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-442860 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-442860 --alsologtostderr -v=5: (1.465510499s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-453638
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-453638: (1.164898634s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.58s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-453638
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-453638: (7.581643577s)
--- PASS: TestMountStart/serial/RestartStopped (8.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-453638 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (56.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331513 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:38:11.584551   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331513 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.759081203s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (56.18s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (47.34s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-331513 -- rollout status deployment/busybox: (3.532187922s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-9j8vj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-vq787 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-9j8vj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-vq787 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-9j8vj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-vq787 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (47.34s)
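
The repeated jsonpath polls above show that a pod can be Running before .status.podIP is populated, which is why the test retries. The two commands it combines, standalone:

    kubectl --context multinode-331513 rollout status deployment/busybox
    kubectl --context multinode-331513 get pods -o jsonpath='{.items[*].status.podIP}'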

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-9j8vj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-9j8vj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-vq787 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-331513 -- exec busybox-7dff88458-vq787 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)
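
The awk/cut pipeline extracts the address that busybox's nslookup resolves for host.minikube.internal (NR==5 is tied to busybox's exact output layout, hence the brittleness). Standalone, letting kubectl pick either replica:

    kubectl --context multinode-331513 exec deploy/busybox -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"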

                                                
                                    
TestMultiNode/serial/AddNode (18.86s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-331513 -v 3 --alsologtostderr
E0829 18:39:46.831995   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-331513 -v 3 --alsologtostderr: (18.157231051s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.08s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-331513 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.08s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp testdata/cp-test.txt multinode-331513:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1695753065/001/cp-test_multinode-331513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513:/home/docker/cp-test.txt multinode-331513-m02:/home/docker/cp-test_multinode-331513_multinode-331513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test_multinode-331513_multinode-331513-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513:/home/docker/cp-test.txt multinode-331513-m03:/home/docker/cp-test_multinode-331513_multinode-331513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test_multinode-331513_multinode-331513-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp testdata/cp-test.txt multinode-331513-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1695753065/001/cp-test_multinode-331513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m02:/home/docker/cp-test.txt multinode-331513:/home/docker/cp-test_multinode-331513-m02_multinode-331513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test_multinode-331513-m02_multinode-331513.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m02:/home/docker/cp-test.txt multinode-331513-m03:/home/docker/cp-test_multinode-331513-m02_multinode-331513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test_multinode-331513-m02_multinode-331513-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp testdata/cp-test.txt multinode-331513-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1695753065/001/cp-test_multinode-331513-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m03:/home/docker/cp-test.txt multinode-331513:/home/docker/cp-test_multinode-331513-m03_multinode-331513.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513 "sudo cat /home/docker/cp-test_multinode-331513-m03_multinode-331513.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 cp multinode-331513-m03:/home/docker/cp-test.txt multinode-331513-m02:/home/docker/cp-test_multinode-331513-m03_multinode-331513-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test_multinode-331513-m03_multinode-331513-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.77s)
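
minikube cp accepts node-qualified paths on either side, which is what the matrix above sweeps; reading a file back goes through ssh -n. A trimmed-down pair from the same matrix:

    out/minikube-linux-amd64 -p multinode-331513 cp testdata/cp-test.txt multinode-331513-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-331513 ssh -n multinode-331513-m02 "sudo cat /home/docker/cp-test.txt"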

                                                
                                    
TestMultiNode/serial/StopNode (2.06s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-331513 node stop m03: (1.171629523s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331513 status: exit status 7 (445.289036ms)
-- stdout --
	multinode-331513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-331513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-331513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr: exit status 7 (441.77664ms)

                                                
                                                
-- stdout --
	multinode-331513
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-331513-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-331513-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:40:16.289994  222343 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:40:16.290113  222343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:40:16.290123  222343 out.go:358] Setting ErrFile to fd 2...
	I0829 18:40:16.290130  222343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:40:16.290346  222343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:40:16.290528  222343 out.go:352] Setting JSON to false
	I0829 18:40:16.290559  222343 mustload.go:65] Loading cluster: multinode-331513
	I0829 18:40:16.290672  222343 notify.go:220] Checking for updates...
	I0829 18:40:16.290956  222343 config.go:182] Loaded profile config "multinode-331513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:40:16.290973  222343 status.go:255] checking status of multinode-331513 ...
	I0829 18:40:16.291360  222343 cli_runner.go:164] Run: docker container inspect multinode-331513 --format={{.State.Status}}
	I0829 18:40:16.308770  222343 status.go:330] multinode-331513 host status = "Running" (err=<nil>)
	I0829 18:40:16.308816  222343 host.go:66] Checking if "multinode-331513" exists ...
	I0829 18:40:16.309133  222343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-331513
	I0829 18:40:16.326950  222343 host.go:66] Checking if "multinode-331513" exists ...
	I0829 18:40:16.327270  222343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:40:16.327337  222343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-331513
	I0829 18:40:16.344611  222343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32910 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/multinode-331513/id_rsa Username:docker}
	I0829 18:40:16.430529  222343 ssh_runner.go:195] Run: systemctl --version
	I0829 18:40:16.434201  222343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:40:16.444021  222343 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0829 18:40:16.492533  222343 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-08-29 18:40:16.483805451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647943680 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0829 18:40:16.493110  222343 kubeconfig.go:125] found "multinode-331513" server: "https://192.168.67.2:8443"
	I0829 18:40:16.493145  222343 api_server.go:166] Checking apiserver status ...
	I0829 18:40:16.493176  222343 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 18:40:16.504385  222343 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2297/cgroup
	I0829 18:40:16.512735  222343 api_server.go:182] apiserver freezer: "13:freezer:/docker/4da8888885bbdbd04b8caae3cb39f083fade5bc97b0339758c421b926fbee39c/kubepods/burstable/pod762db447860034b057df2f3fed72152c/96df362a9bc6b69d0492ad3132f00528710fae1d1686e608cddaa7f66cb31513"
	I0829 18:40:16.512804  222343 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4da8888885bbdbd04b8caae3cb39f083fade5bc97b0339758c421b926fbee39c/kubepods/burstable/pod762db447860034b057df2f3fed72152c/96df362a9bc6b69d0492ad3132f00528710fae1d1686e608cddaa7f66cb31513/freezer.state
	I0829 18:40:16.520046  222343 api_server.go:204] freezer state: "THAWED"
	I0829 18:40:16.520066  222343 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0829 18:40:16.524427  222343 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0829 18:40:16.524447  222343 status.go:422] multinode-331513 apiserver status = Running (err=<nil>)
	I0829 18:40:16.524458  222343 status.go:257] multinode-331513 status: &{Name:multinode-331513 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:40:16.524475  222343 status.go:255] checking status of multinode-331513-m02 ...
	I0829 18:40:16.524700  222343 cli_runner.go:164] Run: docker container inspect multinode-331513-m02 --format={{.State.Status}}
	I0829 18:40:16.542022  222343 status.go:330] multinode-331513-m02 host status = "Running" (err=<nil>)
	I0829 18:40:16.542056  222343 host.go:66] Checking if "multinode-331513-m02" exists ...
	I0829 18:40:16.542317  222343 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-331513-m02
	I0829 18:40:16.558733  222343 host.go:66] Checking if "multinode-331513-m02" exists ...
	I0829 18:40:16.559003  222343 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 18:40:16.559040  222343 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-331513-m02
	I0829 18:40:16.575130  222343 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32915 SSHKeyPath:/home/jenkins/minikube-integration/19531-12929/.minikube/machines/multinode-331513-m02/id_rsa Username:docker}
	I0829 18:40:16.662490  222343 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 18:40:16.672539  222343 status.go:257] multinode-331513-m02 status: &{Name:multinode-331513-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:40:16.672574  222343 status.go:255] checking status of multinode-331513-m03 ...
	I0829 18:40:16.672805  222343 cli_runner.go:164] Run: docker container inspect multinode-331513-m03 --format={{.State.Status}}
	I0829 18:40:16.689235  222343 status.go:330] multinode-331513-m03 host status = "Stopped" (err=<nil>)
	I0829 18:40:16.689256  222343 status.go:343] host is not running, skipping remaining checks
	I0829 18:40:16.689276  222343 status.go:257] multinode-331513-m03 status: &{Name:multinode-331513-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.06s)
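
Note the exit-code contract asserted here: with any node stopped, minikube status exits 7 rather than 0, while stdout still reports per-node state. A hedged sketch of the same check:

	out/minikube-linux-amd64 -p multinode-331513 node stop m03
	out/minikube-linux-amd64 -p multinode-331513 status
	# exit status is 7 whenever at least one host is Stopped, 0 only when everything is Running
	echo "status exited: $?"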

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-331513 node start m03 -v=7 --alsologtostderr: (9.20924809s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.83s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (97.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331513
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-331513
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-331513: (22.385522505s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331513 --wait=true -v=8 --alsologtostderr
E0829 18:41:09.896261   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331513 --wait=true -v=8 --alsologtostderr: (1m14.572544622s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331513
--- PASS: TestMultiNode/serial/RestartKeepsNodes (97.04s)
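
What this test guards is node-set persistence: a full stop followed by start --wait=true must bring back all three nodes without re-adding any, so node list is compared before and after. A rough by-hand equivalent:

	out/minikube-linux-amd64 node list -p multinode-331513    # record the node set
	out/minikube-linux-amd64 stop -p multinode-331513
	out/minikube-linux-amd64 start -p multinode-331513 --wait=true
	out/minikube-linux-amd64 node list -p multinode-331513    # should match the pre-stop list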

                                                
                                    
TestMultiNode/serial/DeleteNode (5.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-331513 node delete m03: (4.633827862s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.19s)
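
The readiness assertion at multinode_test.go:444 prints one Ready-condition status per node via a kubectl go-template, so after deleting m03 it should emit exactly two True lines. A sketch, reusing the template string exactly as the harness logged it:

	out/minikube-linux-amd64 -p multinode-331513 node delete m03
	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"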

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-331513 stop: (21.224918747s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331513 status: exit status 7 (78.426312ms)

                                                
                                                
-- stdout --
	multinode-331513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-331513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr: exit status 7 (83.336987ms)

                                                
                                                
-- stdout --
	multinode-331513
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-331513-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 18:42:30.101791  237836 out.go:345] Setting OutFile to fd 1 ...
	I0829 18:42:30.102059  237836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:42:30.102069  237836 out.go:358] Setting ErrFile to fd 2...
	I0829 18:42:30.102073  237836 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 18:42:30.102311  237836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19531-12929/.minikube/bin
	I0829 18:42:30.102511  237836 out.go:352] Setting JSON to false
	I0829 18:42:30.102545  237836 mustload.go:65] Loading cluster: multinode-331513
	I0829 18:42:30.102609  237836 notify.go:220] Checking for updates...
	I0829 18:42:30.102979  237836 config.go:182] Loaded profile config "multinode-331513": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 18:42:30.102995  237836 status.go:255] checking status of multinode-331513 ...
	I0829 18:42:30.103454  237836 cli_runner.go:164] Run: docker container inspect multinode-331513 --format={{.State.Status}}
	I0829 18:42:30.124215  237836 status.go:330] multinode-331513 host status = "Stopped" (err=<nil>)
	I0829 18:42:30.124265  237836 status.go:343] host is not running, skipping remaining checks
	I0829 18:42:30.124272  237836 status.go:257] multinode-331513 status: &{Name:multinode-331513 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 18:42:30.124300  237836 status.go:255] checking status of multinode-331513-m02 ...
	I0829 18:42:30.124557  237836 cli_runner.go:164] Run: docker container inspect multinode-331513-m02 --format={{.State.Status}}
	I0829 18:42:30.142041  237836 status.go:330] multinode-331513-m02 host status = "Stopped" (err=<nil>)
	I0829 18:42:30.142063  237836 status.go:343] host is not running, skipping remaining checks
	I0829 18:42:30.142069  237836 status.go:257] multinode-331513-m02 status: &{Name:multinode-331513-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331513 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0829 18:43:11.583967   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331513 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (51.113960884s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-331513 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.66s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-331513
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331513-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-331513-m02 --driver=docker  --container-runtime=docker: exit status 14 (61.663752ms)

                                                
                                                
-- stdout --
	* [multinode-331513-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-331513-m02' is duplicated with machine name 'multinode-331513-m02' in profile 'multinode-331513'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-331513-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-331513-m03 --driver=docker  --container-runtime=docker: (20.50247744s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-331513
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-331513: exit status 80 (260.61736ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-331513 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-331513-m03 already exists in multinode-331513-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-331513-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-331513-m03: (2.009445104s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.88s)
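
Two distinct name collisions are validated: a new profile may not reuse a machine name owned by an existing multi-node profile (exit 14, MK_USAGE), and node add refuses a node name already claimed by a standalone profile (exit 80, GUEST_NODE_ADD). By hand, assuming multinode-331513 exists:

	# machine-name collision: m02 already belongs to multinode-331513
	out/minikube-linux-amd64 start -p multinode-331513-m02 --driver=docker --container-runtime=docker    # exit 14
	# node-name collision: a standalone profile squats on the next node name
	out/minikube-linux-amd64 start -p multinode-331513-m03 --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 node add -p multinode-331513                                                # exit 80
	out/minikube-linux-amd64 delete -p multinode-331513-m03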

                                                
                                    
TestPreload (106.08s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0829 18:44:34.647908   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:44:46.832036   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m4.3111428s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-694094 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-694094 image pull gcr.io/k8s-minikube/busybox: (2.151581929s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-694094
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-694094: (10.667891144s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (26.685719478s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-694094 image list
helpers_test.go:175: Cleaning up "test-preload-694094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-694094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-694094: (2.072923447s)
--- PASS: TestPreload (106.08s)
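
The preload scenario: a cluster started with --preload=false on an older Kubernetes pulls an extra image, is stopped, then restarted without --preload=false, and image list is inspected afterwards (preload_test.go:71), the expectation being that the pulled busybox image survived the cycle. Condensed to its essential commands, as a sketch:

	out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=docker --kubernetes-version=v1.24.4
	out/minikube-linux-amd64 -p test-preload-694094 image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-amd64 stop -p test-preload-694094
	out/minikube-linux-amd64 start -p test-preload-694094 --memory=2200 --wait=true --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 -p test-preload-694094 image list    # busybox should still be listed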

                                                
                                    
TestScheduledStopUnix (94.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-964314 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-964314 --memory=2048 --driver=docker  --container-runtime=docker: (21.626666176s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-964314 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-964314 -n scheduled-stop-964314
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-964314 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-964314 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-964314 -n scheduled-stop-964314
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-964314
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-964314 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-964314
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-964314: exit status 7 (60.178859ms)

                                                
                                                
-- stdout --
	scheduled-stop-964314
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-964314 -n scheduled-stop-964314
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-964314 -n scheduled-stop-964314: exit status 7 (59.350356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-964314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-964314
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-964314: (1.60943617s)
--- PASS: TestScheduledStopUnix (94.45s)
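
The scheduled-stop contract exercised above: stop --schedule <duration> arms a background stop (surfaced through the TimeToStop status field), --cancel-scheduled disarms it, and issuing a new --schedule replaces the previous timer. A by-hand sketch:

	out/minikube-linux-amd64 stop -p scheduled-stop-964314 --schedule 5m
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-964314
	out/minikube-linux-amd64 stop -p scheduled-stop-964314 --cancel-scheduled    # cluster keeps running
	out/minikube-linux-amd64 stop -p scheduled-stop-964314 --schedule 15s
	sleep 20; out/minikube-linux-amd64 status -p scheduled-stop-964314           # exit 7: host Stopped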

                                                
                                    
TestSkaffold (101.02s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe3732037793 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-574429 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-574429 --memory=2600 --driver=docker  --container-runtime=docker: (21.760151421s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe3732037793 run --minikube-profile skaffold-574429 --kube-context skaffold-574429 --status-check=true --port-forward=false --interactive=false
E0829 18:48:11.585605   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe3732037793 run --minikube-profile skaffold-574429 --kube-context skaffold-574429 --status-check=true --port-forward=false --interactive=false: (1m2.454528037s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-757bd69d68-xs96r" [990de697-4c0b-4338-be24-c4e668119d25] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003328367s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-54fdf996f8-qzhx4" [e61e1102-bfe1-429e-90b4-7568d1a43aab] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003165303s
helpers_test.go:175: Cleaning up "skaffold-574429" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-574429
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-574429: (2.670339972s)
--- PASS: TestSkaffold (101.02s)
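
Skaffold is pointed at the minikube profile and kube-context explicitly, then the test waits for the two sample deployments (leeroy-app, leeroy-web) to report healthy pods. The equivalent invocation, assuming a skaffold binary on PATH (the harness runs a copy downloaded to /tmp):

	out/minikube-linux-amd64 start -p skaffold-574429 --memory=2600 --driver=docker --container-runtime=docker
	skaffold run --minikube-profile skaffold-574429 --kube-context skaffold-574429 --status-check=true --port-forward=false --interactive=false
	kubectl --context skaffold-574429 get pods -l app=leeroy-app    # expect Running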

                                                
                                    
TestInsufficientStorage (9.74s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-192292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-192292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.652307339s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c3117eb8-b96a-4901-b6ce-6688a89df6a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-192292] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a56927ae-c114-45d1-aef5-0c1bb361a3a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"3ad9d279-42be-4181-bf8e-ffb715cd4e77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf0efb82-91ef-4783-8940-0404a426018b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig"}}
	{"specversion":"1.0","id":"ab52e0f3-fefb-4771-b46a-101994f6ec43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube"}}
	{"specversion":"1.0","id":"25bfc104-64c4-4fc5-90a6-917054291cdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1b81ea9c-322f-47fb-9435-04fdd37be8ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c5bb88b-3f4a-498d-927f-dbb8397ef41f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b1cefb62-eb0a-49c3-b50a-ea9e2c44970c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0f856c9c-6fae-4df2-840b-8c4ab14f1257","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6713c914-3853-424e-a442-d62669e9da49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f4e8a900-44ab-4430-ad18-3e5fc7a4a0fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-192292\" primary control-plane node in \"insufficient-storage-192292\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c35c60a-9a58-4051-a990-8f9c9f8bacf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724775115-19521 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7cd926fc-0116-4ef8-9f9b-4adecb8a312b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"48f160d5-33a2-414e-be67-1e5a2b3a28c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-192292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-192292 --output=json --layout=cluster: exit status 7 (248.150415ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-192292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-192292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:48:57.924943  277229 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-192292" does not appear in /home/jenkins/minikube-integration/19531-12929/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-192292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-192292 --output=json --layout=cluster: exit status 7 (242.544043ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-192292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-192292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0829 18:48:58.168239  277329 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-192292" does not appear in /home/jenkins/minikube-integration/19531-12929/kubeconfig
	E0829 18:48:58.177527  277329 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/insufficient-storage-192292/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-192292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-192292
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-192292: (1.598309532s)
--- PASS: TestInsufficientStorage (9.74s)
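
The low-disk condition appears to be simulated rather than real: judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 lines in the CloudEvents output above, the harness fakes a nearly full /var, so start exits 26 (RSRC_DOCKER_STORAGE) and status reports code 507 (InsufficientStorage). A sketch of the same probe, assuming those test variables behave as the output suggests:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-amd64 start -p insufficient-storage-192292 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=docker    # exit 26
	out/minikube-linux-amd64 status -p insufficient-storage-192292 --output=json --layout=cluster    # exit 7, StatusCode 507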

                                                
                                    
TestRunningBinaryUpgrade (72.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2890147990 start -p running-upgrade-825767 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2890147990 start -p running-upgrade-825767 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.852978569s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-825767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-825767 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.821366816s)
helpers_test.go:175: Cleaning up "running-upgrade-825767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-825767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-825767: (6.854602636s)
--- PASS: TestRunningBinaryUpgrade (72.19s)
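
The upgrade here is binary-over-binary on a live cluster: a released minikube v1.26.0 (fetched to a temp path) creates the profile, then the freshly built binary takes over the same profile with start, with no stop in between. Schematically, using the temp filename from this run:

	/tmp/minikube-v1.26.0.2890147990 start -p running-upgrade-825767 --memory=2200 --vm-driver=docker --container-runtime=docker
	out/minikube-linux-amd64 start -p running-upgrade-825767 --memory=2200 --driver=docker --container-runtime=docker    # in-place upgrade
	out/minikube-linux-amd64 delete -p running-upgrade-825767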

                                                
                                    
TestKubernetesUpgrade (332.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.221916448s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-601644
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-601644: (1.283537646s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-601644 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-601644 status --format={{.Host}}: exit status 7 (75.082052ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0829 18:49:46.831287   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m34.37621178s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-601644 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (89.477681ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-601644] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-601644
	    minikube start -p kubernetes-upgrade-601644 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6016442 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-601644 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0829 18:54:17.325260   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.221807294s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-601644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-601644
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-601644: (2.266733523s)
--- PASS: TestKubernetesUpgrade (332.60s)
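
Three assertions are packed into this test: an upgrade from v1.20.0 to v1.31.0 across a stop succeeds, a subsequent in-place downgrade request fails fast with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) and prints recovery suggestions, and restarting at the already-running version still works. Condensed sketch:

	out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 stop -p kubernetes-upgrade-601644
	out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=docker
	out/minikube-linux-amd64 start -p kubernetes-upgrade-601644 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=docker    # exit 106, refused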

                                                
                                    
TestMissingContainerUpgrade (186.8s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3943393407 start -p missing-upgrade-586537 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3943393407 start -p missing-upgrade-586537 --memory=2200 --driver=docker  --container-runtime=docker: (1m57.020813747s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-586537
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-586537: (10.433584613s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-586537
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-586537 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-586537 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.803836849s)
helpers_test.go:175: Cleaning up "missing-upgrade-586537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-586537
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-586537: (2.906623383s)
--- PASS: TestMissingContainerUpgrade (186.80s)
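
The "missing" part of the scenario: the cluster's container is stopped and removed behind minikube's back with plain docker commands, and the new binary's start must rebuild it from the surviving profile state. In outline, with the old binary's temp filename from this run:

	/tmp/minikube-v1.26.0.3943393407 start -p missing-upgrade-586537 --memory=2200 --driver=docker --container-runtime=docker
	docker stop missing-upgrade-586537 && docker rm missing-upgrade-586537    # simulate the lost container
	out/minikube-linux-amd64 start -p missing-upgrade-586537 --memory=2200 --driver=docker --container-runtime=docker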

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.61s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (175.8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4074885558 start -p stopped-upgrade-558047 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4074885558 start -p stopped-upgrade-558047 --memory=2200 --vm-driver=docker  --container-runtime=docker: (2m11.177010622s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4074885558 -p stopped-upgrade-558047 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4074885558 -p stopped-upgrade-558047 stop: (10.806416746s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-558047 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-558047 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.814681504s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (175.80s)

                                                
                                    
TestPause/serial/Start (40.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-914107 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-914107 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (40.323087749s)
--- PASS: TestPause/serial/Start (40.32s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-558047
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-558047: (1.199472056s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (77.331371ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-497268] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19531
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19531-12929/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19531-12929/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
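
This one is pure flag validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the conflict is rejected in ~77ms with exit 14, before any driver work starts. The failing and valid forms side by side, as a sketch:

	out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=docker    # exit 14, MK_USAGE
	out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --driver=docker --container-runtime=docker                             # valid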

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-497268 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-497268 --driver=docker  --container-runtime=docker: (25.653263479s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-497268 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.97s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (35.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-914107 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-914107 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.698232873s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.71s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --driver=docker  --container-runtime=docker: (6.502858447s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-497268 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-497268 status -o json: exit status 2 (375.132511ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-497268","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-497268
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-497268: (1.857575045s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.74s)

                                                
                                    
TestNoKubernetes/serial/Start (7.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-497268 --no-kubernetes --driver=docker  --container-runtime=docker: (7.778287337s)
--- PASS: TestNoKubernetes/serial/Start (7.78s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-497268 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-497268 "sudo systemctl is-active --quiet service kubelet": exit status 1 (248.059451ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.25s)
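
The check leans on systemd semantics: systemctl is-active exits non-zero when the unit is inactive, minikube ssh propagates that exit status (surfacing above as "Process exited with status 3"), and the test treats the non-zero exit as the pass condition. By hand:

	out/minikube-linux-amd64 ssh -p NoKubernetes-497268 "sudo systemctl is-active --quiet service kubelet"
	echo "kubelet active-check exit: $?"    # non-zero (kubelet not running) is the expected result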

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-497268
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-497268: (1.191846661s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

TestNoKubernetes/serial/StartNoArgs (8.47s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-497268 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-497268 --driver=docker  --container-runtime=docker: (8.469118314s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.47s)

TestPause/serial/Pause (0.51s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-914107 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.51s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-914107 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-914107 --output=json --layout=cluster: exit status 2 (346.214007ms)

-- stdout --
	{"Name":"pause-914107","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-914107","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
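Note: the cluster layout encodes component states as HTTP-style codes (in this output: 200 OK, 405 Stopped, 418 Paused), and `status` exits 2 while the cluster is paused. A sketch of asserting the paused state from the JSON rather than the exit code, assuming jq:

    out/minikube-linux-amd64 status -p pause-914107 --output=json --layout=cluster \
      | jq -e '[.Nodes[].Components.apiserver.StatusCode] | all(. == 418)'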
--- PASS: TestPause/serial/VerifyStatus (0.35s)

TestPause/serial/Unpause (0.48s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-914107 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.48s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-497268 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-497268 "sudo systemctl is-active --quiet service kubelet": exit status 1 (289.587636ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestPause/serial/PauseAgain (0.61s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-914107 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.61s)

TestPause/serial/DeletePaused (2.3s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-914107 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-914107 --alsologtostderr -v=5: (2.299923141s)
--- PASS: TestPause/serial/DeletePaused (2.30s)

TestPause/serial/VerifyDeletedResources (15.4s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.341457859s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-914107
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-914107: exit status 1 (16.61316ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-914107: no such volume
** /stderr **
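Note: `docker volume inspect` prints an empty array and exits 1 when the volume does not exist, which is exactly what proves the deleted profile left nothing behind. The same cleanup assertion as a sketch:

    if docker volume inspect pause-914107 >/dev/null 2>&1; then
      echo "volume still present after delete" >&2
      exit 1
    fi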
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (131.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-247974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0829 18:53:36.347786   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.354502   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.365899   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.387902   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.429293   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.511047   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.672612   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:36.994436   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:37.636468   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:38.918035   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-247974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m11.293994388s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.29s)

TestStartStop/group/no-preload/serial/FirstStart (67.75s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-967248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 18:53:46.601392   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:53:56.843120   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-967248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m7.751252341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.75s)

TestStartStop/group/embed-certs/serial/FirstStart (67.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-189670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 18:54:46.831796   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-189670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m7.130822167s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.13s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-967248 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d03a7e94-e8c4-464a-bafa-35d7d8e8a02e] Pending
helpers_test.go:344: "busybox" [d03a7e94-e8c4-464a-bafa-35d7d8e8a02e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d03a7e94-e8c4-464a-bafa-35d7d8e8a02e] Running
E0829 18:54:58.287526   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003776155s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-967248 exec busybox -- /bin/sh -c "ulimit -n"
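Note: the deploy-app step is: create the busybox pod from testdata/busybox.yaml, wait for the integration-test=busybox label to report Running, then probe the container. Roughly the same flow by hand, as a sketch (this uses kubectl wait for the polling, not the suite's own helpers):

    kubectl --context no-preload-967248 create -f testdata/busybox.yaml
    kubectl --context no-preload-967248 wait --for=condition=Ready pod \
      -l integration-test=busybox --timeout=8m
    kubectl --context no-preload-967248 exec busybox -- /bin/sh -c "ulimit -n"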
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-967248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-967248 describe deploy/metrics-server -n kube-system
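Note: --images and --registries let the enable call override an addon's image and registry per component; the describe that follows is what the test inspects to confirm the override landed. A sketch of checking the rendered image directly:

    kubectl --context no-preload-967248 describe deploy/metrics-server -n kube-system \
      | grep -i image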
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/no-preload/serial/Stop (10.62s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-967248 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-967248 --alsologtostderr -v=3: (10.62394895s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.62s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-967248 -n no-preload-967248
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-967248 -n no-preload-967248: exit status 7 (100.371998ms)

-- stdout --
	Stopped
-- /stdout --
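Note: the test tolerates exit status 7 here ("may be ok" below) because the host was deliberately stopped in the previous step; only the printed Host value is asserted. A sketch of accepting that specific code in a script, on the assumption (taken from this log, not verified against minikube docs) that 7 is the stopped-host code:

    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-967248 -n no-preload-967248
    rc=$?
    [ "$rc" -eq 0 ] || [ "$rc" -eq 7 ] || exit "$rc"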
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-967248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (262.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-967248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-967248 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m22.652136377s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-967248 -n no-preload-967248
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.94s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-189670 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cbcf0d8c-247e-4a19-a86d-8d87e20813d3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cbcf0d8c-247e-4a19-a86d-8d87e20813d3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003951342s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-189670 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-247974 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [31669a06-ca27-4c14-a4b5-b7552424b833] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [31669a06-ca27-4c14-a4b5-b7552424b833] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004552183s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-247974 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-189670 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-189670 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-247974 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-247974 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (10.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-189670 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-189670 --alsologtostderr -v=3: (10.666243691s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.67s)

TestStartStop/group/old-k8s-version/serial/Stop (10.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-247974 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-247974 --alsologtostderr -v=3: (10.949697806s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189670 -n embed-certs-189670
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189670 -n embed-certs-189670: exit status 7 (77.225228ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-189670 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (266.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-189670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-189670 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m26.203124907s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-189670 -n embed-certs-189670
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247974 -n old-k8s-version-247974
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247974 -n old-k8s-version-247974: exit status 7 (66.303766ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-247974 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (140.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-247974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0829 18:56:20.209723   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-247974 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.456148935s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-247974 -n old-k8s-version-247974
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.75s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-282820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 18:57:49.897849   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-282820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (1m8.966471758s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.97s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-282820 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d505470-7b33-44e5-9d61-daf37545d950] Pending
helpers_test.go:344: "busybox" [2d505470-7b33-44e5-9d61-daf37545d950] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d505470-7b33-44e5-9d61-daf37545d950] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003885692s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-282820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-282820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-282820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-282820 --alsologtostderr -v=3
E0829 18:58:11.584299   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/functional-995951/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-282820 --alsologtostderr -v=3: (10.73147049s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.73s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820: exit status 7 (106.070533ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-282820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-282820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-282820 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (4m23.063338747s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.41s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5rtnv" [fd2bcd71-e9a7-4c73-886e-5b321d285da3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003363316s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5rtnv" [fd2bcd71-e9a7-4c73-886e-5b321d285da3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00315585s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-247974 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-247974 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-247974 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247974 -n old-k8s-version-247974
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247974 -n old-k8s-version-247974: exit status 2 (296.096389ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-247974 -n old-k8s-version-247974
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-247974 -n old-k8s-version-247974: exit status 2 (313.468444ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-247974 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247974 -n old-k8s-version-247974
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-247974 -n old-k8s-version-247974
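Note: the round trip being verified above is pause -> status (Paused/Stopped, exit 2) -> unpause -> status (clean exit). The same sequence by hand, as a sketch:

    out/minikube-linux-amd64 pause -p old-k8s-version-247974
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247974   # prints Paused, exits 2
    out/minikube-linux-amd64 unpause -p old-k8s-version-247974
    out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-247974   # expected to exit cleanly again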
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

TestStartStop/group/newest-cni/serial/FirstStart (31.81s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-773140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
E0829 18:59:04.051462   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/skaffold-574429/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-773140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (31.806856637s)
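Note: in CNI mode the suite narrows --wait to apiserver, system_pods and default_sa, since workload pods cannot schedule until a network plugin is configured (see the WARNING lines in the later newest-cni steps). The start invocation being exercised, reduced to its CNI-relevant flags as a sketch:

    out/minikube-linux-amd64 start -p newest-cni-773140 \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=docker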
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.81s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-773140 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/newest-cni/serial/Stop (9.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-773140 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-773140 --alsologtostderr -v=3: (9.52537985s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.53s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-773140 -n newest-cni-773140
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-773140 -n newest-cni-773140: exit status 7 (76.327719ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-773140 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (14.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-773140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-773140 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.0: (14.26206577s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-773140 -n newest-cni-773140
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-773140 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.27s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-773140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-773140 -n newest-cni-773140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-773140 -n newest-cni-773140: exit status 2 (290.560608ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-773140 -n newest-cni-773140
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-773140 -n newest-cni-773140: exit status 2 (290.510444ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-773140 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-773140 -n newest-cni-773140
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-773140 -n newest-cni-773140
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.27s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fxmjm" [1400c5e6-c7f8-4a01-bdc2-949a494913b4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00384873s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/auto/Start (41.5s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (41.501986117s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.50s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fxmjm" [1400c5e6-c7f8-4a01-bdc2-949a494913b4] Running
E0829 18:59:46.831452   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/addons-653578/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004565352s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-967248 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-967248 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/no-preload/serial/Pause (2.38s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-967248 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-967248 -n no-preload-967248
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-967248 -n no-preload-967248: exit status 2 (289.368064ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-967248 -n no-preload-967248
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-967248 -n no-preload-967248: exit status 2 (294.890331ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-967248 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-967248 -n no-preload-967248
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-967248 -n no-preload-967248
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.38s)

TestNetworkPlugins/group/kindnet/Start (59.77s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0829 18:59:54.135755   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.142160   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.153773   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.175764   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.217225   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.298917   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.460793   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:54.782284   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:55.423864   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:56.705814   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 18:59:59.267264   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:00:04.389095   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:00:14.631385   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (59.773550943s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.77s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mkvtz" [61351abe-fb4c-4dd0-a171-7bd360facdd1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mkvtz" [61351abe-fb4c-4dd0-a171-7bd360facdd1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004302518s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.18s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l8jp5" [ecdb520c-7138-4526-9557-17b1bf25bea9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003465657s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/auto/DNS (16.54s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-487775 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-487775 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.144193143s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (16.54s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-l8jp5" [ecdb520c-7138-4526-9557-17b1bf25bea9] Running
E0829 19:00:35.112770   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004474052s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-189670 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-189670 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/embed-certs/serial/Pause (2.42s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-189670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189670 -n embed-certs-189670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189670 -n embed-certs-189670: exit status 2 (306.252577ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189670 -n embed-certs-189670
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189670 -n embed-certs-189670: exit status 2 (275.867139ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-189670 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-189670 -n embed-certs-189670
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-189670 -n embed-certs-189670
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.42s)

TestNetworkPlugins/group/calico/Start (67.47s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0829 19:00:42.487770   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/old-k8s-version-247974/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:00:45.049872   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/old-k8s-version-247974/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.472653472s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.47s)

TestNetworkPlugins/group/auto/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gdvq8" [812c8f28-af01-4221-8518-f248cde5ab7e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00428296s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nc7c5" [706b2d81-5644-4f39-8270-35ed7e15ec82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:01:00.414034   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/old-k8s-version-247974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nc7c5" [706b2d81-5644-4f39-8270-35ed7e15ec82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004918991s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.22s)

TestNetworkPlugins/group/custom-flannel/Start (49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.996743655s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.00s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (68.55s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m8.552301486s)
--- PASS: TestNetworkPlugins/group/false/Start (68.55s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nqprb" [c2111d4b-5ef8-4569-a551-0b86c0c64e09] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005004359s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fhpt7" [ee173d09-efef-42ae-8eaf-e7b264496649] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fhpt7" [ee173d09-efef-42ae-8eaf-e7b264496649] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003241606s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8svc4" [88f6e7a0-8117-48bb-9dd3-d93968e376a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8svc4" [88f6e7a0-8117-48bb-9dd3-d93968e376a1] Running
E0829 19:02:01.858293   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/old-k8s-version-247974/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004124122s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.18s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (36.39s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (36.389955886s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.39s)

TestNetworkPlugins/group/flannel/Start (47.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0829 19:02:37.995700   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/no-preload-967248/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (47.149919791s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.15s)

TestNetworkPlugins/group/false/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

TestNetworkPlugins/group/false/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ttts2" [3ec01890-a3c9-4e10-9537-46efab012e95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ttts2" [3ec01890-a3c9-4e10-9537-46efab012e95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.004801029s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.23s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5sphb" [23b932e3-b05a-4d1e-836c-db122a37c540] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002915382s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5sphb" [23b932e3-b05a-4d1e-836c-db122a37c540] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004440107s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-282820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-282820 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-282820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820: exit status 2 (356.096432ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820: exit status 2 (328.495085ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-282820 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-282820 -n default-k8s-diff-port-282820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.58s)

TestNetworkPlugins/group/bridge/Start (68.08s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0829 19:02:57.525040   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.532321   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.543670   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.565833   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.608925   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.691341   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:57.854536   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:58.176632   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:02:58.818623   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
E0829 19:03:00.100041   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m8.078046074s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.08s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-487775 replace --force -f testdata/netcat-deployment.yaml
E0829 19:03:02.661865   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dn5gl" [9f78e79a-4b5e-4ce7-b3f1-9ac2d8efe529] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:03:07.785848   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-dn5gl" [9f78e79a-4b5e-4ce7-b3f1-9ac2d8efe529] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004788388s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.93s)

TestNetworkPlugins/group/kubenet/Start (68.22s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-487775 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m8.221642315s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (68.22s)

TestNetworkPlugins/group/enable-default-cni/DNS (20.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-487775 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-487775 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.168210591s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-487775 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context enable-default-cni-487775 exec deployment/netcat -- nslookup kubernetes.default: (5.118068944s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (20.87s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t4rsn" [a2007b9a-600a-40df-b692-5c5ec1e721cb] Running
E0829 19:03:18.028052   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/default-k8s-diff-port-282820/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004214264s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nj82t" [ca92ab6d-f572-4c4f-aec9-20aabd37c9ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0829 19:03:23.780230   19739 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/old-k8s-version-247974/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nj82t" [ca92ab6d-f572-4c4f-aec9-20aabd37c9ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004939794s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kz4rc" [95dabb0f-6d6b-4162-bdba-4e2c4a5a810b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kz4rc" [95dabb0f-6d6b-4162-bdba-4e2c4a5a810b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004106738s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.20s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-487775 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-487775 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-62ltt" [de4de454-644d-4126-bbb5-c8973da3b7e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-62ltt" [de4de454-644d-4126-bbb5-c8973da3b7e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00291807s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.19s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-487775 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.13s)

TestNetworkPlugins/group/kubenet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-487775 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.12s)

Test skip (20/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-161107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-161107
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

x
+
TestNetworkPlugins/group/cilium (3.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-487775 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-487775

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-487775

>>> host: /etc/nsswitch.conf:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/hosts:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/resolv.conf:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-487775

>>> host: crictl pods:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: crictl containers:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> k8s: describe netcat deployment:
error: context "cilium-487775" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-487775" does not exist

>>> k8s: netcat logs:
error: context "cilium-487775" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-487775" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-487775" does not exist

>>> k8s: coredns logs:
error: context "cilium-487775" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-487775" does not exist

>>> k8s: api server logs:
error: context "cilium-487775" does not exist

>>> host: /etc/cni:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: ip a s:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: ip r s:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: iptables-save:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: iptables table nat:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-487775

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-487775

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-487775" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-487775" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-487775

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-487775

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-487775" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-487775" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-487775" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-487775" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-487775" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: kubelet daemon config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> k8s: kubelet logs:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19531-12929/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 29 Aug 2024 18:49:51 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-601644
contexts:
- context:
    cluster: kubernetes-upgrade-601644
    user: kubernetes-upgrade-601644
  name: kubernetes-upgrade-601644
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-601644
  user:
    client-certificate: /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/kubernetes-upgrade-601644/client.crt
    client-key: /home/jenkins/minikube-integration/19531-12929/.minikube/profiles/kubernetes-upgrade-601644/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-487775

>>> host: docker daemon status:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: docker daemon config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: docker system info:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: cri-docker daemon status:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: cri-docker daemon config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: cri-dockerd version:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: containerd daemon status:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: containerd daemon config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: containerd config dump:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: crio daemon status:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: crio daemon config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: /etc/crio:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

>>> host: crio config:
* Profile "cilium-487775" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-487775"

----------------------- debugLogs end: cilium-487775 [took: 3.314571719s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-487775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-487775
--- SKIP: TestNetworkPlugins/group/cilium (3.47s)