Test Report: Docker_Linux 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Tests failed: 1/342

|-------|------------------------------|----------|
| Order |         Failed test          | Duration |
|-------|------------------------------|----------|
| 33    | TestAddons/parallel/Registry | 72.53s   |
|-------|------------------------------|----------|

TestAddons/parallel/Registry (72.53s)
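To re-run just this case against a local build, a sketch (assuming minikube's integration suite under test/integration and a binary already built at out/minikube-linux-amd64; CI passes extra harness flags not shown here):

    # -run matches test path segments, so the parent TestAddons still performs
    # cluster setup before the single Registry subtest executes.
    go test -v -timeout 90m ./test/integration -run "TestAddons/parallel/Registry"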

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 1.807401ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-tl9b5" [0da9eb92-a72a-4e20-97a3-ff9fecea622f] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002686559s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xldsf" [2d465593-59aa-4922-8d50-d95af40b4d34] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003327952s
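The two label waits above are roughly equivalent to kubectl wait (a hedged translation for manual debugging; the harness polls pod status directly rather than shelling out):

    # Same labels, namespace and timeouts as the helpers_test.go waits.
    kubectl --context addons-535596 -n kube-system wait pod -l actual-registry=true --for=condition=Ready --timeout=6m0s
    kubectl --context addons-535596 -n kube-system wait pod -l registry-proxy=true --for=condition=Ready --timeout=10m0s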
addons_test.go:338: (dbg) Run:  kubectl --context addons-535596 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-535596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-535596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.143739157s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-535596 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
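The failing step is a plain in-cluster HTTP reachability check, reproducible by hand with the same command the test runs (the profile name addons-535596 is specific to this run):

    # --spider probes the URL without downloading; -S prints response headers.
    # A healthy registry answers HTTP/1.1 200; here the probe hung for the
    # full 1m0s and kubectl exited with status 1.
    kubectl --context addons-535596 run registry-test --rm --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"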
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 ip
2024/09/20 18:05:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable registry --alsologtostderr -v=1
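Before the addon is disabled, the harness fetches the node IP and issues the GET seen above. A host-side probe can also hit the Registry HTTP API v2 directly (a sketch; assumes the registry addon is still enabled and still exposed on port 5000):

    # /v2/ is the registry's API version-check endpoint. A 200 here with the
    # in-cluster wget still failing would point at cluster DNS or the service
    # path rather than the registry itself.
    curl -sS -o /dev/null -w "%{http_code}\n" "http://$(out/minikube-linux-amd64 -p addons-535596 ip):5000/v2/"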
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-535596
helpers_test.go:235: (dbg) docker inspect addons-535596:

-- stdout --
	[
	    {
	        "Id": "f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5",
	        "Created": "2024-09-20T17:52:13.830742372Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 89255,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T17:52:13.954637058Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bb3bcbaabeeeadbf6b43ae7d1d07e504b3c8a94ec024df89bcb237eba4f5e9b3",
	        "ResolvConfPath": "/var/lib/docker/containers/f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5/hosts",
	        "LogPath": "/var/lib/docker/containers/f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5/f1d3e84c4b49a4de7f52fa503b5d86e772302aa6aa2de852054e504ad5b706d5-json.log",
	        "Name": "/addons-535596",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-535596:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-535596",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/23f9af8f05d23bfa2780339d0fcda0f5af49642b088d8d57cee3de684fe57987-init/diff:/var/lib/docker/overlay2/d17c92da102880879ea982f35719cfe968bb9bb2362bb970ef505dec0cc6189e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/23f9af8f05d23bfa2780339d0fcda0f5af49642b088d8d57cee3de684fe57987/merged",
	                "UpperDir": "/var/lib/docker/overlay2/23f9af8f05d23bfa2780339d0fcda0f5af49642b088d8d57cee3de684fe57987/diff",
	                "WorkDir": "/var/lib/docker/overlay2/23f9af8f05d23bfa2780339d0fcda0f5af49642b088d8d57cee3de684fe57987/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-535596",
	                "Source": "/var/lib/docker/volumes/addons-535596/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-535596",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-535596",
	                "name.minikube.sigs.k8s.io": "addons-535596",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c192d44b297ecd10d8e7a2886fb3d196a307f696482c389224772f4744648130",
	            "SandboxKey": "/var/run/docker/netns/c192d44b297e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-535596": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6eb80bab0fbbbd37106971346e69e5a38c60d83f19a984e8175e628f7c6a185f",
	                    "EndpointID": "81af9e820d8271b041f5931243c533ae49e3c4196b5f14101353caa1e905e809",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-535596",
	                        "f1d3e84c4b49"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
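Most of this inspect dump is noise for the failure at hand; the fields worth checking are the container state and the published ports. A quick filter, assuming jq is available on the CI host:

    # Confirms the node container is Running and shows which 127.0.0.1 host
    # ports back 22, 2376, 5000, 8443 and 32443 inside the container.
    docker inspect addons-535596 | jq '.[0] | {Status: .State.Status, Ports: .NetworkSettings.Ports}'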
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-535596 -n addons-535596
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-952527 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | download-docker-952527                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-952527                                                                   | download-docker-952527 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:51 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-440140   | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | binary-mirror-440140                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39251                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-440140                                                                     | binary-mirror-440140   | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:51 UTC |
	| addons  | enable dashboard -p                                                                         | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | addons-535596                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | addons-535596                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-535596 --wait=true                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:55 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 17:55 UTC | 20 Sep 24 17:56 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | -p addons-535596                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | -p addons-535596                                                                            |                        |         |         |                     |                     |
	| addons  | addons-535596 addons                                                                        | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | addons-535596                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-535596 ssh curl -s                                                                   | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-535596 ip                                                                            | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | addons-535596                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-535596 ssh cat                                                                       | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC | 20 Sep 24 18:04 UTC |
	|         | /opt/local-path-provisioner/pvc-1a02ad53-06eb-4e3d-819b-4c8d67dfc852_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:04 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-535596 addons                                                                        | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC | 20 Sep 24 18:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-535596 addons                                                                        | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC | 20 Sep 24 18:05 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-535596 ip                                                                            | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC | 20 Sep 24 18:05 UTC |
	| addons  | addons-535596 addons disable                                                                | addons-535596          | jenkins | v1.34.0 | 20 Sep 24 18:05 UTC | 20 Sep 24 18:05 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:51:50
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:51:50.758772   88492 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:51:50.758902   88492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:50.758912   88492 out.go:358] Setting ErrFile to fd 2...
	I0920 17:51:50.758916   88492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:50.759074   88492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 17:51:50.759700   88492 out.go:352] Setting JSON to false
	I0920 17:51:50.760579   88492 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5663,"bootTime":1726849048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:51:50.760675   88492 start.go:139] virtualization: kvm guest
	I0920 17:51:50.762745   88492 out.go:177] * [addons-535596] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:51:50.764229   88492 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 17:51:50.764244   88492 notify.go:220] Checking for updates...
	I0920 17:51:50.766868   88492 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:51:50.768321   88492 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 17:51:50.769675   88492 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	I0920 17:51:50.771003   88492 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 17:51:50.772204   88492 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 17:51:50.773602   88492 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:51:50.793990   88492 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:51:50.794050   88492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:50.840163   88492 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:51:50.832126357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:50.840264   88492 docker.go:318] overlay module found
	I0920 17:51:50.841986   88492 out.go:177] * Using the docker driver based on user configuration
	I0920 17:51:50.843233   88492 start.go:297] selected driver: docker
	I0920 17:51:50.843250   88492 start.go:901] validating driver "docker" against <nil>
	I0920 17:51:50.843270   88492 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 17:51:50.844026   88492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:50.887439   88492 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-20 17:51:50.879057341 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:50.887634   88492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:51:50.887890   88492 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:51:50.889406   88492 out.go:177] * Using Docker driver with root privileges
	I0920 17:51:50.890318   88492 cni.go:84] Creating CNI manager for ""
	I0920 17:51:50.890403   88492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:51:50.890415   88492 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 17:51:50.890463   88492 start.go:340] cluster config:
	{Name:addons-535596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-535596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:51:50.891744   88492 out.go:177] * Starting "addons-535596" primary control-plane node in "addons-535596" cluster
	I0920 17:51:50.892888   88492 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:51:50.894091   88492 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 17:51:50.895132   88492 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:51:50.895158   88492 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 17:51:50.895174   88492 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 17:51:50.895186   88492 cache.go:56] Caching tarball of preloaded images
	I0920 17:51:50.895284   88492 preload.go:172] Found /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 17:51:50.895299   88492 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 17:51:50.895649   88492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/config.json ...
	I0920 17:51:50.895677   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/config.json: {Name:mk306ba9e0556005f3109419766f916f5414f21a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:51:50.909397   88492 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:51:50.909497   88492 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 17:51:50.909511   88492 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 17:51:50.909516   88492 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 17:51:50.909528   88492 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 17:51:50.909535   88492 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 17:52:02.524862   88492 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 17:52:02.524923   88492 cache.go:194] Successfully downloaded all kic artifacts
	I0920 17:52:02.524998   88492 start.go:360] acquireMachinesLock for addons-535596: {Name:mkf2018feba7c889b64f8b8319e0c59aa6995edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 17:52:02.525125   88492 start.go:364] duration metric: took 100.594µs to acquireMachinesLock for "addons-535596"
	I0920 17:52:02.525148   88492 start.go:93] Provisioning new machine with config: &{Name:addons-535596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-535596 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:52:02.525228   88492 start.go:125] createHost starting for "" (driver="docker")
	I0920 17:52:02.527018   88492 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 17:52:02.527262   88492 start.go:159] libmachine.API.Create for "addons-535596" (driver="docker")
	I0920 17:52:02.527300   88492 client.go:168] LocalClient.Create starting
	I0920 17:52:02.527395   88492 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem
	I0920 17:52:02.640659   88492 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/cert.pem
	I0920 17:52:02.754099   88492 cli_runner.go:164] Run: docker network inspect addons-535596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 17:52:02.769341   88492 cli_runner.go:211] docker network inspect addons-535596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 17:52:02.769409   88492 network_create.go:284] running [docker network inspect addons-535596] to gather additional debugging logs...
	I0920 17:52:02.769431   88492 cli_runner.go:164] Run: docker network inspect addons-535596
	W0920 17:52:02.783729   88492 cli_runner.go:211] docker network inspect addons-535596 returned with exit code 1
	I0920 17:52:02.783753   88492 network_create.go:287] error running [docker network inspect addons-535596]: docker network inspect addons-535596: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-535596 not found
	I0920 17:52:02.783769   88492 network_create.go:289] output of [docker network inspect addons-535596]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-535596 not found
	
	** /stderr **
	I0920 17:52:02.783855   88492 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:52:02.798604   88492 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a7ef60}
	I0920 17:52:02.798652   88492 network_create.go:124] attempt to create docker network addons-535596 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 17:52:02.798693   88492 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-535596 addons-535596
	I0920 17:52:02.853660   88492 network_create.go:108] docker network addons-535596 192.168.49.0/24 created
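The subnet choice can be confirmed afterwards by inspecting the network the kic driver just created (a manual check, not part of the harness):

    # Prints "192.168.49.0/24 192.168.49.1" for this run.
    docker network inspect addons-535596 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'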
	I0920 17:52:02.853696   88492 kic.go:121] calculated static IP "192.168.49.2" for the "addons-535596" container
	I0920 17:52:02.853766   88492 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 17:52:02.867673   88492 cli_runner.go:164] Run: docker volume create addons-535596 --label name.minikube.sigs.k8s.io=addons-535596 --label created_by.minikube.sigs.k8s.io=true
	I0920 17:52:02.883449   88492 oci.go:103] Successfully created a docker volume addons-535596
	I0920 17:52:02.883538   88492 cli_runner.go:164] Run: docker run --rm --name addons-535596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-535596 --entrypoint /usr/bin/test -v addons-535596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 17:52:09.938537   88492 cli_runner.go:217] Completed: docker run --rm --name addons-535596-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-535596 --entrypoint /usr/bin/test -v addons-535596:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (7.054918879s)
	I0920 17:52:09.938566   88492 oci.go:107] Successfully prepared a docker volume addons-535596
	I0920 17:52:09.938597   88492 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:52:09.938627   88492 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 17:52:09.938697   88492 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-535596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 17:52:13.772162   88492 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-535596:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (3.833421013s)
	I0920 17:52:13.772195   88492 kic.go:203] duration metric: took 3.83356408s to extract preloaded images to volume ...
	W0920 17:52:13.772336   88492 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 17:52:13.772448   88492 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 17:52:13.816652   88492 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-535596 --name addons-535596 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-535596 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-535596 --network addons-535596 --ip 192.168.49.2 --volume addons-535596:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
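Each --publish=127.0.0.1:: flag above leaves the host port blank so Docker assigns an ephemeral one, which is why HostConfig.PortBindings in the inspect dump showed empty HostPort values while NetworkSettings.Ports carried the actual assignments (32768-32772). They can also be listed directly:

    # Maps each exposed container port to its ephemeral 127.0.0.1 host port.
    docker port addons-535596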
	I0920 17:52:14.112809   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Running}}
	I0920 17:52:14.129396   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:14.146338   88492 cli_runner.go:164] Run: docker exec addons-535596 stat /var/lib/dpkg/alternatives/iptables
	I0920 17:52:14.186318   88492 oci.go:144] the created container "addons-535596" has a running status.
	I0920 17:52:14.186351   88492 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa...
	I0920 17:52:14.284607   88492 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 17:52:14.304079   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:14.320873   88492 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 17:52:14.320899   88492 kic_runner.go:114] Args: [docker exec --privileged addons-535596 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 17:52:14.363038   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:14.382626   88492 machine.go:93] provisionDockerMachine start ...
	I0920 17:52:14.382730   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:14.402533   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:14.402782   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:14.402802   88492 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 17:52:14.403488   88492 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42790->127.0.0.1:32768: read: connection reset by peer
	I0920 17:52:17.533606   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535596
	
	I0920 17:52:17.533640   88492 ubuntu.go:169] provisioning hostname "addons-535596"
	I0920 17:52:17.533719   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:17.549386   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:17.549564   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:17.549576   88492 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-535596 && echo "addons-535596" | sudo tee /etc/hostname
	I0920 17:52:17.688479   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-535596
	
	I0920 17:52:17.688557   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:17.705468   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:17.705635   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:17.705651   88492 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-535596' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-535596/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-535596' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 17:52:17.833935   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: 
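	The hostname script above is idempotent: /etc/hosts is touched only when no line already ends in the new hostname; an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. The same idiom in isolation (a minimal sketch; NEW_HOST is an illustrative placeholder):

		# Map 127.0.1.1 to a hostname only if it isn't mapped already.
		NEW_HOST=example-node
		if ! grep -q "[[:space:]]${NEW_HOST}$" /etc/hosts; then
		  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
		    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOST}/" /etc/hosts
		  else
		    echo "127.0.1.1 ${NEW_HOST}" | sudo tee -a /etc/hosts
		  fi
		fi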
	I0920 17:52:17.833969   88492 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-80428/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-80428/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-80428/.minikube}
	I0920 17:52:17.834002   88492 ubuntu.go:177] setting up certificates
	I0920 17:52:17.834013   88492 provision.go:84] configureAuth start
	I0920 17:52:17.834061   88492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-535596
	I0920 17:52:17.849191   88492 provision.go:143] copyHostCerts
	I0920 17:52:17.849287   88492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-80428/.minikube/ca.pem (1078 bytes)
	I0920 17:52:17.849393   88492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-80428/.minikube/cert.pem (1123 bytes)
	I0920 17:52:17.849451   88492 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-80428/.minikube/key.pem (1675 bytes)
	I0920 17:52:17.849498   88492 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-80428/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca-key.pem org=jenkins.addons-535596 san=[127.0.0.1 192.168.49.2 addons-535596 localhost minikube]
	I0920 17:52:18.052971   88492 provision.go:177] copyRemoteCerts
	I0920 17:52:18.053033   88492 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 17:52:18.053068   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:18.070416   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:18.162329   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 17:52:18.182705   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 17:52:18.202478   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 17:52:18.222066   88492 provision.go:87] duration metric: took 388.038771ms to configureAuth
	I0920 17:52:18.222089   88492 ubuntu.go:193] setting minikube options for container-runtime
	I0920 17:52:18.222242   88492 config.go:182] Loaded profile config "addons-535596": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:52:18.222289   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:18.237642   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:18.237818   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:18.237831   88492 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 17:52:18.370415   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 17:52:18.370436   88492 ubuntu.go:71] root file system type: overlay
	I0920 17:52:18.370556   88492 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 17:52:18.370626   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:18.387675   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:18.387852   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:18.387909   88492 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 17:52:18.528106   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 17:52:18.528176   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:18.544103   88492 main.go:141] libmachine: Using SSH client type: native
	I0920 17:52:18.544267   88492 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0920 17:52:18.544291   88492 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 17:52:19.226190   88492 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:41.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 17:52:18.522847491 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 17:52:19.226235   88492 machine.go:96] duration metric: took 4.843578185s to provisionDockerMachine
	I0920 17:52:19.226253   88492 client.go:171] duration metric: took 16.698943791s to LocalClient.Create
	I0920 17:52:19.226275   88492 start.go:167] duration metric: took 16.699013355s to libmachine.API.Create "addons-535596"
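	The docker.service update above relies on a change-detection idiom: diff -u exits non-zero when the rendered unit differs from the installed one (or the installed one is missing), so the mv/daemon-reload/enable/restart branch runs only when something actually changed. The pattern in isolation (a minimal sketch; the unit name and paths are illustrative placeholders):

		# Install a rendered unit only when it differs from the live one.
		new=/tmp/example.service.new
		live=/lib/systemd/system/example.service
		sudo diff -u "$live" "$new" || {
		  sudo mv "$new" "$live"
		  sudo systemctl daemon-reload
		  sudo systemctl enable example.service
		  sudo systemctl restart example.service
		}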
	I0920 17:52:19.226287   88492 start.go:293] postStartSetup for "addons-535596" (driver="docker")
	I0920 17:52:19.226302   88492 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 17:52:19.226369   88492 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 17:52:19.226416   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:19.244305   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:19.338514   88492 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 17:52:19.341247   88492 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 17:52:19.341277   88492 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 17:52:19.341284   88492 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 17:52:19.341292   88492 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 17:52:19.341304   88492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-80428/.minikube/addons for local assets ...
	I0920 17:52:19.341354   88492 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-80428/.minikube/files for local assets ...
	I0920 17:52:19.341378   88492 start.go:296] duration metric: took 115.083184ms for postStartSetup
	I0920 17:52:19.341643   88492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-535596
	I0920 17:52:19.357438   88492 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/config.json ...
	I0920 17:52:19.357692   88492 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 17:52:19.357740   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:19.372913   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:19.462776   88492 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 17:52:19.466637   88492 start.go:128] duration metric: took 16.941391509s to createHost
	I0920 17:52:19.466663   88492 start.go:83] releasing machines lock for "addons-535596", held for 16.941525074s
	I0920 17:52:19.466720   88492 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-535596
	I0920 17:52:19.482363   88492 ssh_runner.go:195] Run: cat /version.json
	I0920 17:52:19.482404   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:19.482459   88492 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 17:52:19.482566   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:19.498017   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:19.498169   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:19.651366   88492 ssh_runner.go:195] Run: systemctl --version
	I0920 17:52:19.655160   88492 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 17:52:19.658847   88492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 17:52:19.679624   88492 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 17:52:19.679695   88492 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 17:52:19.702736   88492 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 17:52:19.702762   88492 start.go:495] detecting cgroup driver to use...
	I0920 17:52:19.702795   88492 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:52:19.702906   88492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:52:19.716416   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 17:52:19.724378   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 17:52:19.732233   88492 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 17:52:19.732274   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 17:52:19.740117   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:52:19.748220   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 17:52:19.755985   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 17:52:19.763885   88492 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 17:52:19.771351   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 17:52:19.779134   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 17:52:19.786823   88492 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 17:52:19.794672   88492 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 17:52:19.801433   88492 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 17:52:19.808043   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:19.879466   88492 ssh_runner.go:195] Run: sudo systemctl restart containerd
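	The sed sequence above reconciles containerd with the "cgroupfs" driver detected on the host: it pins the sandbox (pause) image, forces SystemdCgroup = false, normalizes the runtime handlers to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and then reloads systemd and restarts containerd so the edits take effect. The key driver edit in isolation (a sketch of the step that must agree with kubelet's cgroupDriver):

		# Switch containerd to the cgroupfs driver and apply the change.
		sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
		sudo systemctl daemon-reload
		sudo systemctl restart containerd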
	I0920 17:52:19.965879   88492 start.go:495] detecting cgroup driver to use...
	I0920 17:52:19.965926   88492 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 17:52:19.965976   88492 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 17:52:19.976896   88492 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 17:52:19.976963   88492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 17:52:19.987607   88492 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 17:52:20.002141   88492 ssh_runner.go:195] Run: which cri-dockerd
	I0920 17:52:20.005260   88492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 17:52:20.013635   88492 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 17:52:20.030933   88492 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 17:52:20.110007   88492 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 17:52:20.202052   88492 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 17:52:20.202219   88492 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 17:52:20.218123   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:20.305626   88492 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 17:52:20.543328   88492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 17:52:20.553811   88492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:52:20.563516   88492 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 17:52:20.639039   88492 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 17:52:20.715584   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:20.788109   88492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 17:52:20.799429   88492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 17:52:20.808468   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:20.884356   88492 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 17:52:20.939853   88492 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 17:52:20.939933   88492 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 17:52:20.943553   88492 start.go:563] Will wait 60s for crictl version
	I0920 17:52:20.943615   88492 ssh_runner.go:195] Run: which crictl
	I0920 17:52:20.946790   88492 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 17:52:20.976847   88492 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0920 17:52:20.976899   88492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:52:20.998178   88492 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 17:52:21.020005   88492 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0920 17:52:21.020079   88492 cli_runner.go:164] Run: docker network inspect addons-535596 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 17:52:21.035190   88492 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 17:52:21.038316   88492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
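	The one-liner above refreshes the host.minikube.internal mapping by filtering any stale entry out of /etc/hosts, appending the current one, staging the result under /tmp, and copying it back with sudo cp (a plain redirect would fail, since the shell opens the target file before sudo gains privileges). Generalized (a sketch; NAME and IP are illustrative placeholders):

		# Replace-or-add a single tab-separated /etc/hosts entry.
		NAME=host.example.internal
		IP=192.168.49.1
		{ grep -v $'\t'"${NAME}$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
		sudo cp "/tmp/hosts.$$" /etc/hosts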
	I0920 17:52:21.047810   88492 kubeadm.go:883] updating cluster {Name:addons-535596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-535596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 17:52:21.047926   88492 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 17:52:21.047988   88492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:52:21.065558   88492 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:52:21.065582   88492 docker.go:615] Images already preloaded, skipping extraction
	I0920 17:52:21.065628   88492 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 17:52:21.084060   88492 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 17:52:21.084083   88492 cache_images.go:84] Images are preloaded, skipping loading
	I0920 17:52:21.084094   88492 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 17:52:21.084222   88492 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-535596 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-535596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 17:52:21.084272   88492 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 17:52:21.127871   88492 cni.go:84] Creating CNI manager for ""
	I0920 17:52:21.127896   88492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:52:21.127908   88492 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 17:52:21.127926   88492 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-535596 NodeName:addons-535596 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 17:52:21.128058   88492 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-535596"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
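	The generated config above is a single YAML stream holding four documents separated by "---": an InitConfiguration (node registration and API endpoint), a ClusterConfiguration (control-plane component flags, etcd, networking), a KubeletConfiguration, and a KubeProxyConfiguration. One quick way to confirm the stream parses and list the document kinds (assumes the Go-based yq v4 is available; illustrative only):

		# Print the kind of each document in the kubeadm config stream.
		yq '.kind' /var/tmp/minikube/kubeadm.yaml.new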
	
	I0920 17:52:21.128111   88492 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 17:52:21.136023   88492 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 17:52:21.136077   88492 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 17:52:21.144242   88492 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 17:52:21.159369   88492 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 17:52:21.174038   88492 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 17:52:21.189116   88492 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 17:52:21.192033   88492 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 17:52:21.200900   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:21.271252   88492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:52:21.282442   88492 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596 for IP: 192.168.49.2
	I0920 17:52:21.282464   88492 certs.go:194] generating shared ca certs ...
	I0920 17:52:21.282486   88492 certs.go:226] acquiring lock for ca certs: {Name:mk95bc94f37f51e4ab78abaa125a6379bd5fd7b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:21.282637   88492 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-80428/.minikube/ca.key
	I0920 17:52:21.430299   88492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt ...
	I0920 17:52:21.430330   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt: {Name:mk5e443394ae5aad1f8a260b1850e1fb982d7f20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:21.430545   88492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-80428/.minikube/ca.key ...
	I0920 17:52:21.430562   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/ca.key: {Name:mk0ff0e0db481e2ed6fec0bd338a68eb71b83d77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:21.430675   88492 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.key
	I0920 17:52:21.699556   88492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.crt ...
	I0920 17:52:21.699594   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.crt: {Name:mk2ddc389372d268964f64a71965bbcbb94d0f0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:21.699781   88492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.key ...
	I0920 17:52:21.699796   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.key: {Name:mk8402f645feabb82aebe12a1952ccb71d11f11c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:21.699899   88492 certs.go:256] generating profile certs ...
	I0920 17:52:21.699976   88492 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.key
	I0920 17:52:21.699996   88492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt with IP's: []
	I0920 17:52:22.025144   88492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt ...
	I0920 17:52:22.025187   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: {Name:mk7cdc9376b6b3bc345745137cb18208dbf81207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.025397   88492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.key ...
	I0920 17:52:22.025413   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.key: {Name:mk720366cbbaa343bf958f43b75374393fb9aedb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.025526   88492 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key.1324324d
	I0920 17:52:22.025554   88492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt.1324324d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 17:52:22.231140   88492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt.1324324d ...
	I0920 17:52:22.231176   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt.1324324d: {Name:mk72bab60aa77861561c6ffa38773706b42e587a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.231365   88492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key.1324324d ...
	I0920 17:52:22.231385   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key.1324324d: {Name:mkacb39b23b6711c3abfca8b58c3b9184559026e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.231496   88492 certs.go:381] copying /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt.1324324d -> /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt
	I0920 17:52:22.231614   88492 certs.go:385] copying /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key.1324324d -> /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key
	I0920 17:52:22.231691   88492 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.key
	I0920 17:52:22.231718   88492 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.crt with IP's: []
	I0920 17:52:22.366530   88492 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.crt ...
	I0920 17:52:22.366562   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.crt: {Name:mkd9d40cc03603237325998d49672fbb9dc3b3e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.366742   88492 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.key ...
	I0920 17:52:22.366760   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.key: {Name:mk0c87f43062928cdcf9b1de07036d2301d351b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:22.366977   88492 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 17:52:22.367020   88492 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/ca.pem (1078 bytes)
	I0920 17:52:22.367055   88492 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/cert.pem (1123 bytes)
	I0920 17:52:22.367085   88492 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-80428/.minikube/certs/key.pem (1675 bytes)
	I0920 17:52:22.367716   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 17:52:22.388940   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0920 17:52:22.409088   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 17:52:22.428794   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 17:52:22.448968   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 17:52:22.468513   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 17:52:22.487657   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 17:52:22.507411   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 17:52:22.527050   88492 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 17:52:22.546600   88492 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 17:52:22.561115   88492 ssh_runner.go:195] Run: openssl version
	I0920 17:52:22.565718   88492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 17:52:22.573294   88492 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:52:22.576127   88492 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 17:52 /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:52:22.576171   88492 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 17:52:22.581899   88492 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 17:52:22.589550   88492 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 17:52:22.592316   88492 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 17:52:22.592358   88492 kubeadm.go:392] StartCluster: {Name:addons-535596 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-535596 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:52:22.592462   88492 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 17:52:22.608376   88492 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 17:52:22.615726   88492 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 17:52:22.622846   88492 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 17:52:22.622881   88492 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 17:52:22.629895   88492 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 17:52:22.629911   88492 kubeadm.go:157] found existing configuration files:
	
	I0920 17:52:22.629940   88492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 17:52:22.636754   88492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 17:52:22.636789   88492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 17:52:22.643574   88492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 17:52:22.650645   88492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 17:52:22.650690   88492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 17:52:22.657352   88492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 17:52:22.664329   88492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 17:52:22.664364   88492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 17:52:22.671173   88492 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 17:52:22.677943   88492 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 17:52:22.677978   88492 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
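	Each kubeconfig is vetted the same way: grep for the expected control-plane endpoint, and remove the file when the check fails (here every check exits with status 2 simply because the files do not exist yet on first start, so the rm calls are no-ops). The four exchanges condensed into a loop (an equivalent sketch):

		# Drop kubeconfigs that don't reference the expected endpoint.
		EP=https://control-plane.minikube.internal:8443
		for f in admin kubelet controller-manager scheduler; do
		  sudo grep -q "$EP" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
		done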
	I0920 17:52:22.684738   88492 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 17:52:22.717219   88492 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 17:52:22.717290   88492 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 17:52:22.736346   88492 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 17:52:22.736427   88492 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0920 17:52:22.736519   88492 kubeadm.go:310] OS: Linux
	I0920 17:52:22.736609   88492 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 17:52:22.736678   88492 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 17:52:22.736760   88492 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 17:52:22.736825   88492 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 17:52:22.736896   88492 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 17:52:22.736971   88492 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 17:52:22.737035   88492 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 17:52:22.737117   88492 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 17:52:22.737213   88492 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 17:52:22.785636   88492 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 17:52:22.785773   88492 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 17:52:22.785900   88492 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 17:52:22.796130   88492 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 17:52:22.798976   88492 out.go:235]   - Generating certificates and keys ...
	I0920 17:52:22.799069   88492 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 17:52:22.799143   88492 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 17:52:22.928213   88492 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 17:52:23.103104   88492 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 17:52:23.277393   88492 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 17:52:23.725844   88492 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 17:52:23.983220   88492 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 17:52:23.983377   88492 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-535596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:52:24.081977   88492 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 17:52:24.082092   88492 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-535596 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 17:52:24.259697   88492 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 17:52:24.394050   88492 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 17:52:24.487299   88492 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 17:52:24.487369   88492 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 17:52:24.553790   88492 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 17:52:24.801699   88492 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 17:52:24.966853   88492 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 17:52:25.134831   88492 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 17:52:25.242717   88492 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 17:52:25.243115   88492 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 17:52:25.246621   88492 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 17:52:25.248899   88492 out.go:235]   - Booting up control plane ...
	I0920 17:52:25.249012   88492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 17:52:25.249119   88492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 17:52:25.249764   88492 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 17:52:25.258649   88492 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 17:52:25.263521   88492 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 17:52:25.263582   88492 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 17:52:25.351515   88492 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 17:52:25.351661   88492 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 17:52:25.853011   88492 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.604844ms
	I0920 17:52:25.853152   88492 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 17:52:30.354987   88492 kubeadm.go:310] [api-check] The API server is healthy after 4.501918111s
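
The [kubelet-check] and [api-check] phases above are simple health polls: kubeadm repeatedly hits http://127.0.0.1:10248/healthz (the kubelet) and the API server's healthz endpoint until one returns HTTP 200 or the 4m0s budget runs out. A minimal Go sketch of that kind of loop (illustrative only, not kubeadm's actual code):

    // poll_healthz.go - illustrative healthz poll, not kubeadm's source.
    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	// 10248 is the kubelet healthz port shown in the log above.
    	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
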
	I0920 17:52:30.365339   88492 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 17:52:30.375464   88492 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 17:52:30.392921   88492 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 17:52:30.393165   88492 kubeadm.go:310] [mark-control-plane] Marking the node addons-535596 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 17:52:30.399547   88492 kubeadm.go:310] [bootstrap-token] Using token: afv6cb.wjuz9eoem19xvicp
	I0920 17:52:30.400740   88492 out.go:235]   - Configuring RBAC rules ...
	I0920 17:52:30.400905   88492 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 17:52:30.403642   88492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 17:52:30.409345   88492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 17:52:30.411553   88492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 17:52:30.414015   88492 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 17:52:30.416182   88492 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 17:52:30.761200   88492 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 17:52:31.181139   88492 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 17:52:31.760791   88492 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 17:52:31.761693   88492 kubeadm.go:310] 
	I0920 17:52:31.761755   88492 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 17:52:31.761784   88492 kubeadm.go:310] 
	I0920 17:52:31.761907   88492 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 17:52:31.761919   88492 kubeadm.go:310] 
	I0920 17:52:31.761955   88492 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 17:52:31.762034   88492 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 17:52:31.762111   88492 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 17:52:31.762128   88492 kubeadm.go:310] 
	I0920 17:52:31.762197   88492 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 17:52:31.762204   88492 kubeadm.go:310] 
	I0920 17:52:31.762249   88492 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 17:52:31.762255   88492 kubeadm.go:310] 
	I0920 17:52:31.762298   88492 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 17:52:31.762371   88492 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 17:52:31.762431   88492 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 17:52:31.762437   88492 kubeadm.go:310] 
	I0920 17:52:31.762562   88492 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 17:52:31.762635   88492 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 17:52:31.762641   88492 kubeadm.go:310] 
	I0920 17:52:31.762746   88492 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token afv6cb.wjuz9eoem19xvicp \
	I0920 17:52:31.762843   88492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37aeac015b190bbca60b6598975ad964cc26cfb7a61a8664c43dd558ad786070 \
	I0920 17:52:31.762866   88492 kubeadm.go:310] 	--control-plane 
	I0920 17:52:31.762873   88492 kubeadm.go:310] 
	I0920 17:52:31.762943   88492 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 17:52:31.762950   88492 kubeadm.go:310] 
	I0920 17:52:31.763026   88492 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token afv6cb.wjuz9eoem19xvicp \
	I0920 17:52:31.763120   88492 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:37aeac015b190bbca60b6598975ad964cc26cfb7a61a8664c43dd558ad786070 
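
The --discovery-token-ca-cert-hash in the join commands above pins the cluster CA: kubeadm publishes the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info, and joining nodes verify the CA they fetch over the insecure bootstrap channel against it. A standalone Go sketch that recomputes the hash from a ca.crt (the path below follows the certificateDir "/var/lib/minikube/certs" reported in the [certs] phase above):

    // cacert_hash.go - recompute kubeadm's discovery-token-ca-cert-hash.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the Subject Public Key Info, not the whole certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
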
	I0920 17:52:31.765411   88492 kubeadm.go:310] W0920 17:52:22.714959    1921 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:52:31.765763   88492 kubeadm.go:310] W0920 17:52:22.715496    1921 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 17:52:31.765950   88492 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0920 17:52:31.766105   88492 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 17:52:31.766131   88492 cni.go:84] Creating CNI manager for ""
	I0920 17:52:31.766151   88492 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 17:52:31.767960   88492 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 17:52:31.769415   88492 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 17:52:31.777580   88492 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
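
The 496-byte /etc/cni/net.d/1-k8s.conflist written above is the bridge CNI chain announced by the "Configuring bridge CNI" message. The log does not reproduce the payload itself; a bridge conflist of this general shape typically looks like the following (illustrative only — the plugin fields and the host-local subnet are assumptions, not the actual file contents):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
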
	I0920 17:52:31.794935   88492 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 17:52:31.795009   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:31.795103   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-535596 minikube.k8s.io/updated_at=2024_09_20T17_52_31_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-535596 minikube.k8s.io/primary=true
	I0920 17:52:31.863609   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:31.863729   88492 ops.go:34] apiserver oom_adj: -16
	I0920 17:52:32.363706   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:32.864337   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:33.364594   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:33.864436   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:34.364009   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:34.863871   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:35.363752   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:35.864466   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:36.364078   88492 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 17:52:36.427251   88492 kubeadm.go:1113] duration metric: took 4.63230148s to wait for elevateKubeSystemPrivileges
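
The run of `kubectl get sa default` calls above, spaced roughly 500ms apart, is the wait that this duration metric names: minikube polls until the "default" ServiceAccount exists in kube-system, which signals the controller-manager is far enough along for the minikube-rbac cluster-admin binding created at 17:52:31 to take effect. A minimal sketch of that wait loop (assumed behavior, not minikube's actual source):

    // wait_default_sa.go - sketch of the ~500ms ServiceAccount poll above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // timeout value assumed
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }
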
	I0920 17:52:36.427288   88492 kubeadm.go:394] duration metric: took 13.834935798s to StartCluster
	I0920 17:52:36.427309   88492 settings.go:142] acquiring lock: {Name:mk07ca267b6900dd07f138f20d0dcf257cf243ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:36.427451   88492 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 17:52:36.427915   88492 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/kubeconfig: {Name:mk8ced1f9edc9f12273ef466e1d923169239757a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:52:36.428104   88492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 17:52:36.428126   88492 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 17:52:36.428179   88492 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 17:52:36.428265   88492 addons.go:69] Setting yakd=true in profile "addons-535596"
	I0920 17:52:36.428297   88492 addons.go:69] Setting gcp-auth=true in profile "addons-535596"
	I0920 17:52:36.428318   88492 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-535596"
	I0920 17:52:36.428325   88492 addons.go:69] Setting cloud-spanner=true in profile "addons-535596"
	I0920 17:52:36.428343   88492 config.go:182] Loaded profile config "addons-535596": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:52:36.428352   88492 addons.go:234] Setting addon cloud-spanner=true in "addons-535596"
	I0920 17:52:36.428359   88492 mustload.go:65] Loading cluster: addons-535596
	I0920 17:52:36.428369   88492 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-535596"
	I0920 17:52:36.428389   88492 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-535596"
	I0920 17:52:36.428395   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428398   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428411   88492 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-535596"
	I0920 17:52:36.428389   88492 addons.go:69] Setting default-storageclass=true in profile "addons-535596"
	I0920 17:52:36.428425   88492 addons.go:69] Setting registry=true in profile "addons-535596"
	I0920 17:52:36.428441   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428442   88492 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-535596"
	I0920 17:52:36.428449   88492 addons.go:234] Setting addon registry=true in "addons-535596"
	I0920 17:52:36.428470   88492 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-535596"
	I0920 17:52:36.428484   88492 addons.go:69] Setting volcano=true in profile "addons-535596"
	I0920 17:52:36.428491   88492 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-535596"
	I0920 17:52:36.428495   88492 addons.go:234] Setting addon volcano=true in "addons-535596"
	I0920 17:52:36.428512   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428562   88492 config.go:182] Loaded profile config "addons-535596": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 17:52:36.428689   88492 addons.go:69] Setting storage-provisioner=true in profile "addons-535596"
	I0920 17:52:36.428739   88492 addons.go:234] Setting addon storage-provisioner=true in "addons-535596"
	I0920 17:52:36.428768   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428783   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428799   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428807   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428915   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428929   88492 addons.go:69] Setting ingress=true in profile "addons-535596"
	I0920 17:52:36.428931   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428947   88492 addons.go:234] Setting addon ingress=true in "addons-535596"
	I0920 17:52:36.428986   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.429144   88492 addons.go:69] Setting volumesnapshots=true in profile "addons-535596"
	I0920 17:52:36.429166   88492 addons.go:234] Setting addon volumesnapshots=true in "addons-535596"
	I0920 17:52:36.429189   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.428990   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.429358   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.429658   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.429687   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.428916   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.429010   88492 addons.go:69] Setting inspektor-gadget=true in profile "addons-535596"
	I0920 17:52:36.429970   88492 addons.go:234] Setting addon inspektor-gadget=true in "addons-535596"
	I0920 17:52:36.429001   88492 addons.go:69] Setting ingress-dns=true in profile "addons-535596"
	I0920 17:52:36.430068   88492 addons.go:234] Setting addon ingress-dns=true in "addons-535596"
	I0920 17:52:36.430160   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.429017   88492 addons.go:69] Setting metrics-server=true in profile "addons-535596"
	I0920 17:52:36.430345   88492 addons.go:234] Setting addon metrics-server=true in "addons-535596"
	I0920 17:52:36.430385   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.430109   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.430870   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.430917   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.431714   88492 out.go:177] * Verifying Kubernetes components...
	I0920 17:52:36.428476   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.432591   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.434805   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.435050   88492 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 17:52:36.428413   88492 addons.go:234] Setting addon yakd=true in "addons-535596"
	I0920 17:52:36.435320   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.435866   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.459182   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 17:52:36.461343   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 17:52:36.464883   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 17:52:36.465111   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.468019   88492 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 17:52:36.468137   88492 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 17:52:36.468192   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 17:52:36.470726   88492 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 17:52:36.470747   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 17:52:36.470806   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.471979   88492 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 17:52:36.472120   88492 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:52:36.472141   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 17:52:36.472189   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.473184   88492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:52:36.473200   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 17:52:36.473242   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.473464   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 17:52:36.474230   88492 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 17:52:36.475366   88492 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 17:52:36.475383   88492 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 17:52:36.475427   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.476484   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 17:52:36.477666   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 17:52:36.477891   88492 addons.go:234] Setting addon default-storageclass=true in "addons-535596"
	I0920 17:52:36.477938   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.478415   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.480998   88492 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-535596"
	I0920 17:52:36.481048   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:36.481501   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:36.483114   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 17:52:36.484353   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 17:52:36.484376   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 17:52:36.484424   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.502543   88492 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 17:52:36.504581   88492 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 17:52:36.506139   88492 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 17:52:36.509317   88492 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:52:36.509349   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 17:52:36.509418   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.516277   88492 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:52:36.516284   88492 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 17:52:36.518128   88492 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 17:52:36.518150   88492 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 17:52:36.518217   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.519198   88492 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 17:52:36.522626   88492 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:52:36.523987   88492 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:52:36.524010   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 17:52:36.524077   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.530007   88492 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 17:52:36.532204   88492 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 17:52:36.532234   88492 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 17:52:36.532307   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.539134   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.539632   88492 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 17:52:36.541162   88492 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 17:52:36.541198   88492 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 17:52:36.541553   88492 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 17:52:36.543143   88492 out.go:177]   - Using image docker.io/busybox:stable
	I0920 17:52:36.543205   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 17:52:36.543217   88492 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 17:52:36.543279   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.543518   88492 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 17:52:36.543535   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 17:52:36.543582   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.544325   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.545321   88492 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:52:36.545340   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 17:52:36.545381   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.555485   88492 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 17:52:36.558247   88492 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:52:36.558296   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 17:52:36.558489   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.564036   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.564461   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.567264   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.574749   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.586829   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.588365   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.593057   88492 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 17:52:36.593123   88492 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 17:52:36.593203   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:36.599921   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.606448   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.607983   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.609355   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.611427   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:36.612848   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
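
Each `docker container inspect -f` call above evaluates a Go text/template against the container's JSON state to pull the host port bound to 22/tcp, which is what feeds the ssh clients on 127.0.0.1:32768. A small mock showing how that nested index expression evaluates (the data structure here is illustrative, standing in for Docker's real inspect output):

    // port_template.go - reproduce the --format expression on mock data.
    package main

    import (
    	"os"
    	"text/template"
    )

    type binding struct{ HostPort string }

    func main() {
    	data := map[string]any{
    		"NetworkSettings": map[string]any{
    			"Ports": map[string][]binding{"22/tcp": {{HostPort: "32768"}}},
    		},
    	}
    	// index twice: first into the Ports map by "22/tcp", then into the
    	// resulting slice at 0; .HostPort reads the field of that element.
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	if err := tmpl.Execute(os.Stdout, data); err != nil { // prints 32768
    		panic(err)
    	}
    }
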
	W0920 17:52:36.634967   88492 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0920 17:52:36.635011   88492 retry.go:31] will retry after 190.296317ms: ssh: handshake failed: EOF
	I0920 17:52:36.743858   88492 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 17:52:36.743918   88492 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
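
The pipeline above patches CoreDNS in place: it reads the coredns ConfigMap, uses sed to insert a `hosts` block before the `forward . /etc/resolv.conf` directive and a `log` directive before `errors`, then pipes the result back through `kubectl replace`. Reconstructed from those sed expressions, the affected region of the Corefile ends up shaped like this (surrounding directives elided):

    .:53 {
            log
            errors
            ...
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            ...
    }

This is how host.minikube.internal resolves to the Docker gateway (192.168.49.1) from inside the cluster, as the "host record injected" line later confirms.
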
	I0920 17:52:36.853641   88492 node_ready.go:35] waiting up to 6m0s for node "addons-535596" to be "Ready" ...
	I0920 17:52:36.933775   88492 node_ready.go:49] node "addons-535596" has status "Ready":"True"
	I0920 17:52:36.933803   88492 node_ready.go:38] duration metric: took 80.124475ms for node "addons-535596" to be "Ready" ...
	I0920 17:52:36.933815   88492 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:52:36.945090   88492 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace to be "Ready" ...
	I0920 17:52:37.032930   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 17:52:37.041553   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 17:52:37.047266   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 17:52:37.048108   88492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 17:52:37.048176   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 17:52:37.056487   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 17:52:37.131072   88492 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 17:52:37.131159   88492 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 17:52:37.140390   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 17:52:37.146921   88492 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 17:52:37.146951   88492 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 17:52:37.236399   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 17:52:37.236724   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 17:52:37.332274   88492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 17:52:37.332356   88492 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 17:52:37.340970   88492 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 17:52:37.341064   88492 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 17:52:37.343621   88492 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 17:52:37.343695   88492 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 17:52:37.343942   88492 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 17:52:37.343988   88492 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 17:52:37.431473   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 17:52:37.454447   88492 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:52:37.454479   88492 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 17:52:37.550233   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 17:52:37.550338   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 17:52:37.638377   88492 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:52:37.638407   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 17:52:37.648969   88492 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 17:52:37.649047   88492 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 17:52:37.732302   88492 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 17:52:37.732391   88492 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 17:52:37.755029   88492 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 17:52:37.755111   88492 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 17:52:37.845819   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 17:52:38.138025   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 17:52:38.237331   88492 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 17:52:38.237424   88492 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 17:52:38.432839   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 17:52:38.432931   88492 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 17:52:38.450040   88492 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 17:52:38.450141   88492 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 17:52:38.633831   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 17:52:38.633868   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 17:52:38.937195   88492 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 17:52:38.937289   88492 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 17:52:38.952537   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:39.032784   88492 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:52:39.032815   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 17:52:39.038888   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 17:52:39.038912   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 17:52:39.248892   88492 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.504924752s)
	I0920 17:52:39.248986   88492 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 17:52:39.255011   88492 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:52:39.255110   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 17:52:39.349510   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.31647492s)
	I0920 17:52:39.432625   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 17:52:39.553605   88492 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 17:52:39.553636   88492 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 17:52:39.631260   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 17:52:39.631348   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 17:52:39.636742   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:52:39.832245   88492 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-535596" context rescaled to 1 replicas
	I0920 17:52:39.843523   88492 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 17:52:39.843570   88492 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 17:52:39.944543   88492 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 17:52:39.944639   88492 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 17:52:40.236379   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 17:52:40.236419   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 17:52:40.347375   88492 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:52:40.347408   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 17:52:40.453557   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.411909192s)
	I0920 17:52:40.737044   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 17:52:40.748925   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 17:52:40.749016   88492 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 17:52:41.043134   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:41.133485   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 17:52:41.133516   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 17:52:42.044960   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 17:52:42.045002   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 17:52:42.334071   88492 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:52:42.334171   88492 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 17:52:42.735185   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 17:52:43.541335   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:43.543745   88492 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 17:52:43.543910   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:43.563637   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:44.351913   88492 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 17:52:44.737340   88492 addons.go:234] Setting addon gcp-auth=true in "addons-535596"
	I0920 17:52:44.737402   88492 host.go:66] Checking if "addons-535596" exists ...
	I0920 17:52:44.737896   88492 cli_runner.go:164] Run: docker container inspect addons-535596 --format={{.State.Status}}
	I0920 17:52:44.763241   88492 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 17:52:44.763285   88492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-535596
	I0920 17:52:44.778434   88492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/addons-535596/id_rsa Username:docker}
	I0920 17:52:45.956512   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:46.048630   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.992105422s)
	I0920 17:52:46.048703   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.001349883s)
	I0920 17:52:46.048719   88492 addons.go:475] Verifying addon ingress=true in "addons-535596"
	I0920 17:52:46.048744   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.908317407s)
	I0920 17:52:46.050584   88492 out.go:177] * Verifying ingress addon...
	I0920 17:52:46.056723   88492 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 17:52:46.232851   88492 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 17:52:46.232952   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:46.635957   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:47.143304   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:47.637622   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:48.137591   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:48.536192   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:48.636369   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:48.642828   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.406333513s)
	I0920 17:52:48.642967   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.40617626s)
	I0920 17:52:48.643013   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (11.211433989s)
	I0920 17:52:48.643073   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.797215736s)
	I0920 17:52:48.643098   88492 addons.go:475] Verifying addon metrics-server=true in "addons-535596"
	I0920 17:52:48.643160   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.505105361s)
	I0920 17:52:48.643178   88492 addons.go:475] Verifying addon registry=true in "addons-535596"
	I0920 17:52:48.643307   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.210586378s)
	I0920 17:52:48.643532   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.006743665s)
	W0920 17:52:48.643568   88492 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:52:48.643590   88492 retry.go:31] will retry after 184.641993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 17:52:48.643669   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.90651408s)
	I0920 17:52:48.644962   88492 out.go:177] * Verifying registry addon...
	I0920 17:52:48.644966   88492 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-535596 service yakd-dashboard -n yakd-dashboard
	
	I0920 17:52:48.647773   88492 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 17:52:48.734632   88492 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 17:52:48.734731   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
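The kapi.go lines above show the verification loop: list pods in kube-system by the addon's label selector, then re-poll until every match leaves Pending. A minimal client-go sketch of the same idea, assuming the minikube kubeconfig path from the log; the polling interval and timeout are illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=registry"
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing matched the selector yet
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil // every matching pod is Running
	})
	if err != nil {
		panic(err)
	}
}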
	I0920 17:52:48.829285   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 17:52:49.145435   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:49.152322   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:49.561386   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:49.732361   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:50.057644   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.322398308s)
	I0920 17:52:50.057747   88492 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-535596"
	I0920 17:52:50.057674   88492 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.294406635s)
	I0920 17:52:50.059228   88492 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 17:52:50.059240   88492 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 17:52:50.060908   88492 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 17:52:50.061649   88492 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 17:52:50.062247   88492 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 17:52:50.062264   88492 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 17:52:50.062737   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:50.147425   88492 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 17:52:50.147455   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:50.150584   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:50.159717   88492 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 17:52:50.159742   88492 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 17:52:50.245634   88492 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:52:50.245656   88492 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 17:52:50.348795   88492 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 17:52:50.561793   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:50.565630   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:50.652534   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:50.952499   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:51.133142   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:51.135950   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:51.152125   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:51.445149   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.615810378s)
	I0920 17:52:51.452919   88492 pod_ready.go:98] pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 17:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:52:39 +0000 UTC,FinishedAt:2024-09-20 17:52:50 +0000 UTC,ContainerID:docker://b1247c916dde2c7df1a0e845607f204dddaba9091249111de62fad48cb68a13f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b1247c916dde2c7df1a0e845607f204dddaba9091249111de62fad48cb68a13f Started:0xc0021dc1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d4a70} {Name:kube-api-access-lwq5w MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d4a80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:52:51.452952   88492 pod_ready.go:82] duration metric: took 14.507771818s for pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace to be "Ready" ...
	E0920 17:52:51.452966   88492 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-cclxq" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 17:52:36 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-20 17:52:36 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 17:52:39 +0000 UTC,FinishedAt:2024-09-20 17:52:50 +0000 UTC,ContainerID:docker://b1247c916dde2c7df1a0e845607f204dddaba9091249111de62fad48cb68a13f,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://b1247c916dde2c7df1a0e845607f204dddaba9091249111de62fad48cb68a13f Started:0xc0021dc1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc0020d4a70} {Name:kube-api-access-lwq5w MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc0020d4a80}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 17:52:51.452981   88492 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace to be "Ready" ...
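The skipped coredns pod above illustrates the readiness rule pod_ready.go applies: a pod whose phase is terminal (Succeeded here, because the old replica exited during the rollout) can never report Ready, so the wait abandons it and moves on to its replacement, coredns-7c65d6cfc9-kb66b. A hedged sketch of that check; isPodReady is an illustrative name, not minikube's helper.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reproduces the two rules visible in the log: a pod in a terminal
// phase (Succeeded/Failed) is skipped outright, otherwise the Ready condition
// must be True. Illustrative, not minikube's pod_ready.go.
func isPodReady(pod *corev1.Pod) (ready bool, skip bool) {
	if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
		return false, true // terminal: will never become Ready
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, false
		}
	}
	return false, false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Phase = corev1.PodSucceeded
	ready, skip := isPodReady(pod)
	fmt.Println(ready, skip) // false true: the "skipping!" case above
}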
	I0920 17:52:51.560933   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:51.660433   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:51.661432   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:51.736580   88492 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.387744004s)
	I0920 17:52:51.739490   88492 addons.go:475] Verifying addon gcp-auth=true in "addons-535596"
	I0920 17:52:51.741725   88492 out.go:177] * Verifying gcp-auth addon...
	I0920 17:52:51.743833   88492 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 17:52:51.760391   88492 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:52:52.060937   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:52.065708   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:52.151970   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:52.561317   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:52.566273   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:52.651632   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:53.061737   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:53.065332   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:53.162264   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:53.458676   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:53.561731   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:53.565457   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:53.651894   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:54.061606   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:54.066163   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:54.161304   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:54.561564   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:54.565178   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:54.651726   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:55.063869   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:55.067260   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:55.151403   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:55.462626   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:55.561441   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:55.565930   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:55.651463   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:56.061818   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:56.065260   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:56.151618   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:56.561657   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:56.565335   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:56.652010   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:57.060810   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:57.066371   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:57.151545   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:57.561211   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:57.565765   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:57.651126   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:57.958714   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:52:58.061935   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:58.065437   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:58.151951   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:58.560704   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:58.564828   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:58.650803   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:59.061824   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:59.066094   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:59.152561   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:52:59.562273   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:52:59.565901   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:52:59.651401   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:00.061384   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:00.065864   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:00.151279   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:00.458788   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:00.562104   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:00.565716   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:00.651865   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:01.061790   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:01.065516   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:01.151877   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:01.562257   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:01.565668   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:01.652281   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:02.063614   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:02.065769   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:02.152516   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:02.458976   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:02.561856   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:02.565502   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:02.652095   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:03.061814   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:03.065599   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:03.152152   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:03.560572   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:03.565321   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:03.651537   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:04.060966   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:04.065459   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:04.151798   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:04.560769   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:04.565662   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:04.651630   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:04.958708   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:05.061674   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:05.065323   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:05.161988   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:05.561978   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:05.565835   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:05.651415   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:06.061935   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:06.135370   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:06.151123   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:06.561272   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:06.565541   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:06.651803   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:07.061384   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:07.066024   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:07.151495   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:07.460249   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:07.561341   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:07.565809   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:07.651038   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:08.061245   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:08.065572   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:08.160458   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:08.560666   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:08.567633   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:08.651763   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:09.061646   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:09.065968   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:09.152396   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:09.562084   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:09.565631   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:09.651888   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:09.958675   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:10.061398   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:10.066541   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:10.151725   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:10.560966   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:10.566013   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:10.651375   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:11.061258   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:11.066113   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:11.151035   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:11.561857   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:11.565411   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:11.651923   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:12.061680   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:12.065786   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:12.151590   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 17:53:12.457996   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:12.561082   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:12.565539   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:12.661149   88492 kapi.go:107] duration metric: took 24.013374214s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 17:53:13.060770   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:13.066136   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:13.561779   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:13.565599   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:14.061302   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:14.066490   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:14.458474   88492 pod_ready.go:103] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"False"
	I0920 17:53:14.560953   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:14.565563   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:15.061859   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:15.065342   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:15.593375   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:15.593534   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:15.958399   88492 pod_ready.go:93] pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:15.958420   88492 pod_ready.go:82] duration metric: took 24.505430449s for pod "coredns-7c65d6cfc9-kb66b" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.958429   88492 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.962147   88492 pod_ready.go:93] pod "etcd-addons-535596" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:15.962163   88492 pod_ready.go:82] duration metric: took 3.728887ms for pod "etcd-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.962171   88492 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.966017   88492 pod_ready.go:93] pod "kube-apiserver-addons-535596" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:15.966031   88492 pod_ready.go:82] duration metric: took 3.853913ms for pod "kube-apiserver-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.966038   88492 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.969406   88492 pod_ready.go:93] pod "kube-controller-manager-addons-535596" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:15.969420   88492 pod_ready.go:82] duration metric: took 3.376211ms for pod "kube-controller-manager-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.969427   88492 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5rlh4" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.972826   88492 pod_ready.go:93] pod "kube-proxy-5rlh4" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:15.972844   88492 pod_ready.go:82] duration metric: took 3.411064ms for pod "kube-proxy-5rlh4" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:15.972854   88492 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:16.061411   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:16.065405   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:16.356978   88492 pod_ready.go:93] pod "kube-scheduler-addons-535596" in "kube-system" namespace has status "Ready":"True"
	I0920 17:53:16.357005   88492 pod_ready.go:82] duration metric: took 384.142023ms for pod "kube-scheduler-addons-535596" in "kube-system" namespace to be "Ready" ...
	I0920 17:53:16.357014   88492 pod_ready.go:39] duration metric: took 39.423185128s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 17:53:16.357045   88492 api_server.go:52] waiting for apiserver process to appear ...
	I0920 17:53:16.357109   88492 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 17:53:16.375279   88492 api_server.go:72] duration metric: took 39.947117977s to wait for apiserver process to appear ...
	I0920 17:53:16.375307   88492 api_server.go:88] waiting for apiserver healthz status ...
	I0920 17:53:16.375336   88492 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 17:53:16.381130   88492 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 17:53:16.382117   88492 api_server.go:141] control plane version: v1.31.1
	I0920 17:53:16.382141   88492 api_server.go:131] duration metric: took 6.821132ms to wait for apiserver health ...
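The healthz probe above is a plain HTTPS GET against the apiserver endpoint that expects a 200 and the literal body "ok". A minimal sketch; skipping TLS verification here is an assumption made for brevity, whereas minikube authenticates with the cluster's certificates.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for this sketch only: skip cert verification instead
			// of loading the cluster CA and client certs as minikube does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	const url = "https://192.168.49.2:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}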
	I0920 17:53:16.382151   88492 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 17:53:16.561208   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:16.563772   88492 system_pods.go:59] 17 kube-system pods found
	I0920 17:53:16.563807   88492 system_pods.go:61] "coredns-7c65d6cfc9-kb66b" [0bd77b6b-a435-4bb4-a6a7-1552b9274413] Running
	I0920 17:53:16.563822   88492 system_pods.go:61] "csi-hostpath-attacher-0" [4e4a13a8-fe1d-46c2-91f3-9bbd4878076d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:53:16.563833   88492 system_pods.go:61] "csi-hostpath-resizer-0" [fe8cd22d-6310-4e88-9cc9-92aa3ac59d45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:53:16.563848   88492 system_pods.go:61] "csi-hostpathplugin-twd6q" [13775150-b68f-4d87-a002-e168d03b9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:53:16.563859   88492 system_pods.go:61] "etcd-addons-535596" [3f4d79a3-f5d6-4175-9767-cfcb52693e51] Running
	I0920 17:53:16.563865   88492 system_pods.go:61] "kube-apiserver-addons-535596" [17fe9f97-0de0-40d6-aba0-02940d16ab09] Running
	I0920 17:53:16.563871   88492 system_pods.go:61] "kube-controller-manager-addons-535596" [9ccdf5d3-acf2-4b9e-b1a5-f15b719c626b] Running
	I0920 17:53:16.563878   88492 system_pods.go:61] "kube-ingress-dns-minikube" [cc5e8296-28ac-419c-a13b-4f7d80cce4a6] Running
	I0920 17:53:16.563883   88492 system_pods.go:61] "kube-proxy-5rlh4" [3c988b46-3034-49a5-ab41-a646ea47b63c] Running
	I0920 17:53:16.563888   88492 system_pods.go:61] "kube-scheduler-addons-535596" [2202cf2b-035a-4b1f-866a-27a3ee4b432f] Running
	I0920 17:53:16.563893   88492 system_pods.go:61] "metrics-server-84c5f94fbc-gtp88" [a1ca81cf-49a8-4226-814b-5471bd80feb6] Running
	I0920 17:53:16.563899   88492 system_pods.go:61] "nvidia-device-plugin-daemonset-2qwl9" [8548f3fc-ad81-478f-ad1a-1f23a856925c] Running
	I0920 17:53:16.563906   88492 system_pods.go:61] "registry-66c9cd494c-tl9b5" [0da9eb92-a72a-4e20-97a3-ff9fecea622f] Running
	I0920 17:53:16.563911   88492 system_pods.go:61] "registry-proxy-xldsf" [2d465593-59aa-4922-8d50-d95af40b4d34] Running
	I0920 17:53:16.563947   88492 system_pods.go:61] "snapshot-controller-56fcc65765-2tnds" [85a904ca-e1db-4211-b135-1955518d5246] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:53:16.563959   88492 system_pods.go:61] "snapshot-controller-56fcc65765-vm65r" [dbe43e0b-fb2d-4702-a4c9-b57ef880c114] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:53:16.563967   88492 system_pods.go:61] "storage-provisioner" [1946c3dd-7ddb-413c-9a67-cf09d5d76254] Running
	I0920 17:53:16.563978   88492 system_pods.go:74] duration metric: took 181.817765ms to wait for pod list to return data ...
	I0920 17:53:16.563991   88492 default_sa.go:34] waiting for default service account to be created ...
	I0920 17:53:16.565881   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:16.756142   88492 default_sa.go:45] found service account: "default"
	I0920 17:53:16.756171   88492 default_sa.go:55] duration metric: took 192.169239ms for default service account to be created ...
	I0920 17:53:16.756182   88492 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 17:53:16.962351   88492 system_pods.go:86] 17 kube-system pods found
	I0920 17:53:16.962385   88492 system_pods.go:89] "coredns-7c65d6cfc9-kb66b" [0bd77b6b-a435-4bb4-a6a7-1552b9274413] Running
	I0920 17:53:16.962398   88492 system_pods.go:89] "csi-hostpath-attacher-0" [4e4a13a8-fe1d-46c2-91f3-9bbd4878076d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 17:53:16.962409   88492 system_pods.go:89] "csi-hostpath-resizer-0" [fe8cd22d-6310-4e88-9cc9-92aa3ac59d45] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 17:53:16.962420   88492 system_pods.go:89] "csi-hostpathplugin-twd6q" [13775150-b68f-4d87-a002-e168d03b9dd3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 17:53:16.962431   88492 system_pods.go:89] "etcd-addons-535596" [3f4d79a3-f5d6-4175-9767-cfcb52693e51] Running
	I0920 17:53:16.962438   88492 system_pods.go:89] "kube-apiserver-addons-535596" [17fe9f97-0de0-40d6-aba0-02940d16ab09] Running
	I0920 17:53:16.962447   88492 system_pods.go:89] "kube-controller-manager-addons-535596" [9ccdf5d3-acf2-4b9e-b1a5-f15b719c626b] Running
	I0920 17:53:16.962454   88492 system_pods.go:89] "kube-ingress-dns-minikube" [cc5e8296-28ac-419c-a13b-4f7d80cce4a6] Running
	I0920 17:53:16.962460   88492 system_pods.go:89] "kube-proxy-5rlh4" [3c988b46-3034-49a5-ab41-a646ea47b63c] Running
	I0920 17:53:16.962468   88492 system_pods.go:89] "kube-scheduler-addons-535596" [2202cf2b-035a-4b1f-866a-27a3ee4b432f] Running
	I0920 17:53:16.962474   88492 system_pods.go:89] "metrics-server-84c5f94fbc-gtp88" [a1ca81cf-49a8-4226-814b-5471bd80feb6] Running
	I0920 17:53:16.962482   88492 system_pods.go:89] "nvidia-device-plugin-daemonset-2qwl9" [8548f3fc-ad81-478f-ad1a-1f23a856925c] Running
	I0920 17:53:16.962488   88492 system_pods.go:89] "registry-66c9cd494c-tl9b5" [0da9eb92-a72a-4e20-97a3-ff9fecea622f] Running
	I0920 17:53:16.962495   88492 system_pods.go:89] "registry-proxy-xldsf" [2d465593-59aa-4922-8d50-d95af40b4d34] Running
	I0920 17:53:16.962513   88492 system_pods.go:89] "snapshot-controller-56fcc65765-2tnds" [85a904ca-e1db-4211-b135-1955518d5246] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:53:16.962524   88492 system_pods.go:89] "snapshot-controller-56fcc65765-vm65r" [dbe43e0b-fb2d-4702-a4c9-b57ef880c114] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 17:53:16.962534   88492 system_pods.go:89] "storage-provisioner" [1946c3dd-7ddb-413c-9a67-cf09d5d76254] Running
	I0920 17:53:16.962544   88492 system_pods.go:126] duration metric: took 206.354908ms to wait for k8s-apps to be running ...
	I0920 17:53:16.962555   88492 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 17:53:16.962622   88492 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 17:53:16.976081   88492 system_svc.go:56] duration metric: took 13.516515ms WaitForService to wait for kubelet
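The kubelet check above leans on systemctl semantics: with is-active --quiet nothing is printed and the unit state is conveyed entirely by the exit code, so the caller only needs the command's error value. A short sketch (the sudo/SSH plumbing of ssh_runner.go is omitted).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `is-active --quiet` exits 0 only when the unit is active, so a nil
	// error from Run() means the kubelet service is running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}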
	I0920 17:53:16.976108   88492 kubeadm.go:582] duration metric: took 40.547953038s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 17:53:16.976130   88492 node_conditions.go:102] verifying NodePressure condition ...
	I0920 17:53:17.061119   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:17.065940   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:17.157149   88492 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0920 17:53:17.157176   88492 node_conditions.go:123] node cpu capacity is 8
	I0920 17:53:17.157190   88492 node_conditions.go:105] duration metric: took 181.053829ms to run NodePressure ...
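Verifying NodePressure reads each node's advertised capacity from its status, which is where the ephemeral-storage (304681132Ki) and CPU (8) figures above come from. An illustrative client-go sketch under the same kubeconfig assumption as earlier.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList (map of resource name -> quantity).
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}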
	I0920 17:53:17.157204   88492 start.go:241] waiting for startup goroutines ...
	I0920 17:53:17.561965   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:17.565602   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:18.061660   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:18.064848   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:18.562594   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:18.566612   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:19.062262   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:19.065823   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:19.562073   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:19.566190   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:20.062013   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:20.065759   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:20.561307   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:20.566074   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:21.061836   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:21.065525   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:21.561328   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:21.565679   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:22.061053   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:22.065176   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:22.560948   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:22.565955   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:23.062548   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:23.065902   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:23.562313   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:23.565646   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:24.060796   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:24.065719   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:24.561389   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:24.565955   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:25.061478   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:25.065813   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:25.562064   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:25.565508   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:26.061535   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:26.065199   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:26.561015   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:26.565273   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:27.060791   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:27.065112   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:27.562030   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:27.565618   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:28.061073   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:28.065916   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:28.561156   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:28.565333   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:29.061190   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:29.065443   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:29.561801   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:29.565160   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:30.070725   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:30.070987   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:30.561637   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:30.565955   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:31.062093   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:31.065891   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:31.562227   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:31.566385   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:32.062007   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:32.065732   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:32.560459   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:32.566005   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:33.062492   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:33.066317   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:33.560540   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:33.565023   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:34.060828   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:34.065654   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:34.561890   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:34.565489   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:35.061515   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:35.064882   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:35.561534   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:35.566040   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:36.061555   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:36.065502   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 17:53:36.561267   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:36.565539   88492 kapi.go:107] duration metric: took 46.503890088s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 17:53:37.060780   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:37.561176   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:38.060945   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:38.561469   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:39.060917   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:39.560817   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:40.061106   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:40.560813   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:41.062289   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:41.560862   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:42.060361   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:42.561304   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:43.061317   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:43.560628   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:44.060483   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:44.561090   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:45.060969   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:45.560567   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:46.060609   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:46.561273   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:47.060817   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:47.560369   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:48.060780   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:48.560258   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:49.060716   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:49.561256   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:50.061449   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:50.561868   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:51.061084   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:51.562022   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:52.061995   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:52.560780   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:53.061427   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:53.561548   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:54.061150   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:54.561503   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:55.062261   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:55.561103   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:56.062172   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:56.560952   88492 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 17:53:57.061457   88492 kapi.go:107] duration metric: took 1m11.004736014s to wait for app.kubernetes.io/name=ingress-nginx ...
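The run above shows minikube's kapi helper polling pods by label selector on a fixed ~500ms interval until one reports ready, then emitting a duration metric. Outside the test harness, roughly the same readiness gate can be expressed with a single kubectl command; a minimal sketch, where the ingress-nginx namespace and the 6m timeout are assumptions not taken from this log:

    # Block until pods matching the selector report Ready, or fail after the timeout.
    kubectl --context addons-535596 -n ingress-nginx wait pod \
      --selector=app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m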
	I0920 17:54:15.248042   88492 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 17:54:15.248067   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:15.747388   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:16.247900   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:16.747020   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:17.247044   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:17.746811   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:18.247148   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:18.747350   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:19.247783   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:19.746633   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:20.248083   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:20.747416   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:21.248021   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:21.747882   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:22.248227   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:22.747711   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:23.247772   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:23.747238   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:24.246712   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:24.747819   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:25.246748   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:25.746869   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:26.247469   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:26.748065   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:27.247013   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:27.747101   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:28.247679   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:28.746493   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:29.248155   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:29.747132   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:30.247515   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:30.748065   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:31.247758   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:31.747613   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:32.247968   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:32.746999   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:33.246810   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:33.746938   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:34.247007   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:34.747111   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:35.247427   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:35.747444   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:36.248278   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:36.747403   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:37.247348   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:37.747370   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:38.247958   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:38.746678   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:39.247766   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:39.746612   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:40.248019   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:40.747329   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:41.248045   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:41.747332   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:42.247846   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:42.746787   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:43.246634   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:43.747652   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:44.248968   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:44.747016   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:45.247224   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:45.746904   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:46.247417   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:46.747129   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:47.246719   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:47.747714   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:48.248157   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:48.746894   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:49.246847   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:49.746609   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:50.247595   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:50.746882   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:51.247494   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:51.747855   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:52.246924   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:52.746874   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:53.246849   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:53.746877   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:54.246789   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:54.746913   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:55.247099   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:55.746776   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:56.247443   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:56.747972   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:57.246966   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:57.746872   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:58.247365   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:58.747429   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:59.247309   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:54:59.747007   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:00.247150   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:00.747483   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:01.247834   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:01.746905   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:02.247435   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:02.747566   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:03.247530   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:03.747767   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:04.247106   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:04.747068   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:05.247476   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:05.747233   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:06.248051   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:06.747605   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:07.247745   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:07.747752   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:08.248028   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:08.746951   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:09.247039   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:09.747101   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:10.247550   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:10.747952   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:11.247159   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:11.747613   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:12.248018   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:12.746786   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:13.246854   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:13.746938   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:14.247709   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:14.748013   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:15.247128   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:15.747148   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:16.247910   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:16.748124   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:17.247045   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:17.746882   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:18.247248   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:18.747076   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:19.247361   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:19.747782   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:20.248341   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:20.747804   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:21.247750   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:21.747288   88492 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 17:55:22.247343   88492 kapi.go:107] duration metric: took 2m30.503507279s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 17:55:22.248897   88492 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-535596 cluster.
	I0920 17:55:22.250039   88492 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 17:55:22.251121   88492 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 17:55:22.252297   88492 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, storage-provisioner-rancher, volcano, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 17:55:22.253401   88492 addons.go:510] duration metric: took 2m45.825223776s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner storage-provisioner-rancher volcano nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 17:55:22.253450   88492 start.go:246] waiting for cluster config update ...
	I0920 17:55:22.253472   88492 start.go:255] writing updated cluster config ...
	I0920 17:55:22.253728   88492 ssh_runner.go:195] Run: rm -f paused
	I0920 17:55:22.301022   88492 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 17:55:22.302908   88492 out.go:177] * Done! kubectl is now configured to use "addons-535596" cluster and "default" namespace by default
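Per the three advisory lines printed above, the gcp-auth webhook mounts credentials into every newly created pod, honors a `gcp-auth-skip-secret` label as a per-pod opt-out, and leaves pre-existing pods untouched unless the addon is re-enabled with --refresh. A minimal sketch of both knobs; the label value "true" and the pod name are illustrative assumptions, since the log only specifies the label key:

    # Opt a single pod out of credential injection via the skip label.
    kubectl --context addons-535596 run no-creds --image=busybox \
      --labels=gcp-auth-skip-secret=true -- sleep 3600

    # Re-mount credentials into pods that existed before the addon was enabled.
    out/minikube-linux-amd64 -p addons-535596 addons enable gcp-auth --refresh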
	
	
	==> Docker <==
	Sep 20 18:05:06 addons-535596 dockerd[1340]: time="2024-09-20T18:05:06.657890758Z" level=info msg="ignoring event" container=514ed59ffcf76cb79bee735baa1d020cc575171b64cacfcd2797a7baf49d0bd3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.133838634Z" level=info msg="ignoring event" container=bc18a556a7c0a9748cd7f91603330f81d1e75030915bd48cbf7adc555e0682ee module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.231707660Z" level=info msg="ignoring event" container=3d0a8a73409412c1e969fcc91d7f8b95676b4c8b16fabada3f654e195716c05a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.234860317Z" level=info msg="ignoring event" container=0bc92f8f4b3230c6df15544dd67ced0b17aa3274491a3a2678e177f39f68d95b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.237126035Z" level=info msg="ignoring event" container=2bf6f6377c86e24144a536e75e387089ee6700f9fca2c558294e2574a9533980 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.241766113Z" level=info msg="ignoring event" container=dadc9c630bf0a8ba64573e908e55325cf2df76af9bc7856b69411b19e68e1d9c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.246901894Z" level=info msg="ignoring event" container=2a49133a7bc0fa34bf647c1922619baf79d5a88dab80ddb565fab7c40a9117f1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.246942616Z" level=info msg="ignoring event" container=f164a200ed53f9e3d99c389b94c7a8f449c74bb8f89b859054895a38909e3707 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.330916606Z" level=info msg="ignoring event" container=2e0cba76f7ec5a316f066aded9164b7e0ea9eac754e468ceb638026b3ff3f182 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.463572609Z" level=info msg="ignoring event" container=1a02e0b7091a33f5b6655cdec969fa7d031354db40ab401a2f72cb798e68851d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.524471720Z" level=info msg="ignoring event" container=e55c7406d79ab9a5b4e5b22c83da3815c11dd164a5f9826902e4f3de2fd6edc4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:08 addons-535596 dockerd[1340]: time="2024-09-20T18:05:08.555544583Z" level=info msg="ignoring event" container=bd3b6701bb24763d7bef1db2b34027848150e999864ed3f86ffd95ebd380e692 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:13 addons-535596 dockerd[1340]: time="2024-09-20T18:05:13.267097118Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=661f01decf06491b810e0810026f3befe9e1a5e0d93273f16c829372934384c7
	Sep 20 18:05:13 addons-535596 dockerd[1340]: time="2024-09-20T18:05:13.287854217Z" level=info msg="ignoring event" container=661f01decf06491b810e0810026f3befe9e1a5e0d93273f16c829372934384c7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:13 addons-535596 cri-dockerd[1604]: time="2024-09-20T18:05:13Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"local-path-provisioner-86d989889c-z6mvn_local-path-storage\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 20 18:05:13 addons-535596 dockerd[1340]: time="2024-09-20T18:05:13.413172343Z" level=info msg="ignoring event" container=7fd64108a3e725b893202b7f5edad78751d5f8ace6b7cb0d94321ac005992f69 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:14 addons-535596 dockerd[1340]: time="2024-09-20T18:05:14.552556016Z" level=info msg="ignoring event" container=b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:14 addons-535596 dockerd[1340]: time="2024-09-20T18:05:14.561215544Z" level=info msg="ignoring event" container=8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:14 addons-535596 dockerd[1340]: time="2024-09-20T18:05:14.775387235Z" level=info msg="ignoring event" container=81dc3ce6b8c95117e89f9f731ea06779ba198de823a5c23016d54aaba6312b8d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:14 addons-535596 dockerd[1340]: time="2024-09-20T18:05:14.798934558Z" level=info msg="ignoring event" container=d604cf6a230c0b1f2272976d9434e1645b9b3f00ee3ff1fb36234edfefe3c619 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:14 addons-535596 dockerd[1340]: time="2024-09-20T18:05:14.832829555Z" level=info msg="ignoring event" container=8c802b531aec88fab3182009e51446c1bee998ffb82bf7f0ad3a3fa7f4033479 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:15 addons-535596 dockerd[1340]: time="2024-09-20T18:05:15.241313442Z" level=info msg="ignoring event" container=a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:15 addons-535596 dockerd[1340]: time="2024-09-20T18:05:15.294951512Z" level=info msg="ignoring event" container=623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:15 addons-535596 dockerd[1340]: time="2024-09-20T18:05:15.361349607Z" level=info msg="ignoring event" container=9c5ef7d30b7e80e94298f50e76a7633400c7a8bd37f7352bc783d66063eee7f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 18:05:15 addons-535596 dockerd[1340]: time="2024-09-20T18:05:15.452604636Z" level=info msg="ignoring event" container=0394aebe80d7621654380d12941a2eafe2d345d77df5f6f03ebf13169df03f3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                          CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f25f327bb69da       a416a98b71e22                                                                                                  33 seconds ago       Exited              helper-pod                0                   0ceb2c877fba1       helper-pod-delete-pvc-1a02ad53-06eb-4e3d-819b-4c8d67dfc852
	68555f95d7c02       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                36 seconds ago       Exited              busybox                   0                   32e91526ac1d6       test-local-path
	db29515714b74       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                41 seconds ago       Exited              helper-pod                0                   e7a64638080ba       helper-pod-create-pvc-1a02ad53-06eb-4e3d-819b-4c8d67dfc852
	e0d5754ed09c7       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                    54 seconds ago       Running             hello-world-app           0                   e5897f01080e5       hello-world-app-55bf9c44b4-764dj
	8ae985aceac07       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                  About a minute ago   Running             nginx                     0                   09447d5a17b05       nginx
	312ef4918365d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb   9 minutes ago        Running             gcp-auth                  0                   68666b5c33ceb       gcp-auth-89d5ffd79-mjk69
	047dca7997b2e       6e38f40d628db                                                                                                  12 minutes ago       Running             storage-provisioner       0                   c6a00d1b88b33       storage-provisioner
	077cef7a7d94d       c69fa2e9cbf5f                                                                                                  12 minutes ago       Running             coredns                   0                   d0aaa951f3d11       coredns-7c65d6cfc9-kb66b
	18c5f51ab1087       60c005f310ff3                                                                                                  12 minutes ago       Running             kube-proxy                0                   7194d5f4f9bd4       kube-proxy-5rlh4
	e8760faf3b7a4       2e96e5913fc06                                                                                                  12 minutes ago       Running             etcd                      0                   71dad05f762ee       etcd-addons-535596
	73848aa492fb1       9aa1fad941575                                                                                                  12 minutes ago       Running             kube-scheduler            0                   4142862c4d69f       kube-scheduler-addons-535596
	36f37168d518d       175ffd71cce3d                                                                                                  12 minutes ago       Running             kube-controller-manager   0                   c45997c59f66b       kube-controller-manager-addons-535596
	4f595087e3bbc       6bab7719df100                                                                                                  12 minutes ago       Running             kube-apiserver            0                   1005763f8b077       kube-apiserver-addons-535596
	
	
	==> coredns [077cef7a7d94] <==
	[INFO] 10.244.0.6:59075 - 58545 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000104573s
	[INFO] 10.244.0.6:56966 - 29954 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000158919s
	[INFO] 10.244.0.6:56966 - 46080 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090742s
	[INFO] 10.244.0.6:38719 - 37629 "A IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005145561s
	[INFO] 10.244.0.6:38719 - 15352 "AAAA IN registry.kube-system.svc.cluster.local.us-west1-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.005073341s
	[INFO] 10.244.0.6:47259 - 1431 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004514084s
	[INFO] 10.244.0.6:47259 - 9114 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.00586847s
	[INFO] 10.244.0.6:57290 - 34300 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005003809s
	[INFO] 10.244.0.6:57290 - 20704 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004939693s
	[INFO] 10.244.0.6:59574 - 23535 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152126s
	[INFO] 10.244.0.6:59574 - 48115 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000220936s
	[INFO] 10.244.0.25:56050 - 60864 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000380037s
	[INFO] 10.244.0.25:48895 - 60991 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000478824s
	[INFO] 10.244.0.25:42200 - 44088 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000161896s
	[INFO] 10.244.0.25:58473 - 21950 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000174622s
	[INFO] 10.244.0.25:56668 - 32875 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124353s
	[INFO] 10.244.0.25:33313 - 50056 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000174659s
	[INFO] 10.244.0.25:43505 - 42476 "A IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.006393039s
	[INFO] 10.244.0.25:59450 - 23479 "AAAA IN storage.googleapis.com.us-west1-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.008076811s
	[INFO] 10.244.0.25:36841 - 43532 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006227195s
	[INFO] 10.244.0.25:56283 - 12028 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007916684s
	[INFO] 10.244.0.25:35300 - 62305 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004025277s
	[INFO] 10.244.0.25:52609 - 44043 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004288243s
	[INFO] 10.244.0.25:32969 - 42605 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.000736335s
	[INFO] 10.244.0.25:34075 - 63722 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000825414s
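The NXDOMAIN/NOERROR pairs above are the normal effect of the pod resolver's search path: each lookup for registry.kube-system.svc.cluster.local is first tried with every search suffix (svc.cluster.local, cluster.local, the GCE-internal domains) before the bare name finally resolves with NOERROR. Writing the name fully qualified with a trailing dot skips that expansion; a minimal sketch, runnable from inside any pod:

    # The trailing dot marks the name as absolute, so no search suffixes are appended.
    nslookup registry.kube-system.svc.cluster.local.
    wget --spider -S http://registry.kube-system.svc.cluster.local./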
	
	
	==> describe nodes <==
	Name:               addons-535596
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-535596
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-535596
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T17_52_31_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-535596
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 17:52:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-535596
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 18:05:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 18:05:04 +0000   Fri, 20 Sep 2024 17:52:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 18:05:04 +0000   Fri, 20 Sep 2024 17:52:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 18:05:04 +0000   Fri, 20 Sep 2024 17:52:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 18:05:04 +0000   Fri, 20 Sep 2024 17:52:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-535596
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c895f3fe3584e05aabd226c923bd20f
	  System UUID:                f460337f-4a57-4b64-9549-3592911aa66d
	  Boot ID:                    e18ec7c8-f3b2-4a00-b841-fc395c9f5435
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  default                     hello-world-app-55bf9c44b4-764dj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  gcp-auth                    gcp-auth-89d5ffd79-mjk69                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-kb66b                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-535596                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-535596             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-535596    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-5rlh4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-535596             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-535596 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-535596 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-535596 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-535596 event: Registered Node addons-535596 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 02 86 07 be 40 0e 08 06
	[  +2.077618] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4e 60 23 e3 f8 eb 08 06
	[  +2.152722] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a 7c 1b 97 62 cf 08 06
	[  +5.642728] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 2a fc f1 42 53 61 08 06
	[  +0.133082] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 08 ab d3 d4 df 08 06
	[  +0.287123] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e 45 49 db fc 89 08 06
	[ +21.688679] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5e dc 12 10 70 c5 08 06
	[  +1.015773] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 3e 36 b7 78 7b c4 08 06
	[Sep20 17:54] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ba ed 17 98 5b c1 08 06
	[  +0.071047] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a eb 5f 97 eb 8d 08 06
	[Sep20 17:55] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4e 80 71 d7 ad b9 08 06
	[  +0.000601] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff ca 8d 80 f8 cb a2 08 06
	[Sep20 18:04] IPv4: martian source 10.244.0.29 from 10.244.0.21, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e dc 12 10 70 c5 08 06
	
	
	==> etcd [e8760faf3b7a] <==
	{"level":"info","ts":"2024-09-20T17:52:26.868406Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-20T17:52:26.956779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T17:52:26.956819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T17:52:26.956845Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T17:52:26.956875Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T17:52:26.956889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:52:26.956907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T17:52:26.956922Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T17:52:26.957719Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-535596 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T17:52:26.957709Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:52:26.957749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:52:26.957752Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T17:52:26.958043Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T17:52:26.958075Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T17:52:26.958275Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:52:26.958375Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:52:26.958401Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T17:52:26.958668Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:52:26.959005Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T17:52:26.959624Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T17:52:26.959808Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T17:52:42.333983Z","caller":"traceutil/trace.go:171","msg":"trace[222744135] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"100.909546ms","start":"2024-09-20T17:52:42.233056Z","end":"2024-09-20T17:52:42.333965Z","steps":["trace[222744135] 'process raft request'  (duration: 100.784697ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T18:02:27.555047Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1853}
	{"level":"info","ts":"2024-09-20T18:02:27.579491Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1853,"took":"23.896544ms","hash":3363923587,"current-db-size-bytes":9043968,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4939776,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-20T18:02:27.579541Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3363923587,"revision":1853,"compact-revision":-1}
	
	
	==> gcp-auth [312ef4918365] <==
	2024/09/20 17:56:02 Ready to write response ...
	2024/09/20 17:56:02 Ready to marshal response ...
	2024/09/20 17:56:02 Ready to write response ...
	2024/09/20 18:04:05 Ready to marshal response ...
	2024/09/20 18:04:05 Ready to write response ...
	2024/09/20 18:04:05 Ready to marshal response ...
	2024/09/20 18:04:05 Ready to write response ...
	2024/09/20 18:04:05 Ready to marshal response ...
	2024/09/20 18:04:05 Ready to write response ...
	2024/09/20 18:04:10 Ready to marshal response ...
	2024/09/20 18:04:10 Ready to write response ...
	2024/09/20 18:04:14 Ready to marshal response ...
	2024/09/20 18:04:14 Ready to write response ...
	2024/09/20 18:04:19 Ready to marshal response ...
	2024/09/20 18:04:19 Ready to write response ...
	2024/09/20 18:04:33 Ready to marshal response ...
	2024/09/20 18:04:33 Ready to write response ...
	2024/09/20 18:04:33 Ready to marshal response ...
	2024/09/20 18:04:33 Ready to write response ...
	2024/09/20 18:04:35 Ready to marshal response ...
	2024/09/20 18:04:35 Ready to write response ...
	2024/09/20 18:04:42 Ready to marshal response ...
	2024/09/20 18:04:42 Ready to write response ...
	2024/09/20 18:04:59 Ready to marshal response ...
	2024/09/20 18:04:59 Ready to write response ...
	
	
	==> kernel <==
	 18:05:16 up  1:47,  0 users,  load average: 0.21, 0.40, 1.25
	Linux addons-535596 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4f595087e3bb] <==
	I0920 18:04:05.050284       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.102.130.34"}
	I0920 18:04:10.318622       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0920 18:04:10.486438       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.185.255"}
	I0920 18:04:16.271115       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0920 18:04:17.347896       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0920 18:04:19.964450       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.13.36"}
	I0920 18:04:43.333160       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0920 18:04:43.724810       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 18:04:43.730348       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 18:04:43.735552       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 18:04:58.737563       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0920 18:05:08.536406       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0920 18:05:14.321108       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:05:14.321164       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:05:14.333372       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:05:14.333422       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:05:14.333988       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:05:14.334045       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:05:14.344489       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:05:14.344534       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0920 18:05:14.360069       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0920 18:05:14.360112       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0920 18:05:15.334310       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0920 18:05:15.361063       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0920 18:05:15.365018       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [36f37168d518] <==
	W0920 18:04:50.924310       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:04:50.924354       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:04:52.049117       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:04:52.049166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:04:53.279911       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:04:53.279952       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:04:54.883393       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:04:54.883439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:04:59.616617       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:04:59.616669       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:05:01.827912       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:05:01.827960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:05:04.944768       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-535596"
	I0920 18:05:08.004957       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0920 18:05:08.050344       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0920 18:05:08.782388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-535596"
	W0920 18:05:10.663200       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:05:10.663246       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 18:05:14.375252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="10.672µs"
	I0920 18:05:15.201598       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="10.624µs"
	E0920 18:05:15.335805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 18:05:15.362626       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0920 18:05:15.366321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 18:05:16.379123       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 18:05:16.379169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [18c5f51ab108] <==
	I0920 17:52:39.335728       1 server_linux.go:66] "Using iptables proxy"
	I0920 17:52:40.045325       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 17:52:40.045399       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 17:52:40.442190       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 17:52:40.442262       1 server_linux.go:169] "Using iptables Proxier"
	I0920 17:52:40.554074       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 17:52:40.554474       1 server.go:483] "Version info" version="v1.31.1"
	I0920 17:52:40.554509       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 17:52:40.556393       1 config.go:199] "Starting service config controller"
	I0920 17:52:40.556425       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 17:52:40.556455       1 config.go:105] "Starting endpoint slice config controller"
	I0920 17:52:40.556460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 17:52:40.556994       1 config.go:328] "Starting node config controller"
	I0920 17:52:40.557003       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 17:52:40.731561       1 shared_informer.go:320] Caches are synced for node config
	I0920 17:52:40.731608       1 shared_informer.go:320] Caches are synced for service config
	I0920 17:52:40.731652       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [73848aa492fb] <==
	E0920 17:52:28.951418       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:28.951423       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 17:52:28.951440       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0920 17:52:28.951464       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:28.951575       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:52:28.951609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:29.867976       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 17:52:29.868026       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0920 17:52:29.874313       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 17:52:29.874355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:29.899726       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 17:52:29.899766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:29.931714       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 17:52:29.931759       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:29.981982       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 17:52:29.982023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:30.070627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 17:52:30.070675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:30.071065       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 17:52:30.071115       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:30.100074       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 17:52:30.100109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 17:52:30.136479       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 17:52:30.136525       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0920 17:52:32.050014       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.136106    2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cv8gf\" (UniqueName: \"kubernetes.io/projected/05bae9a3-f388-42c1-b91c-cc0a1a46f75b-kube-api-access-cv8gf\") on node \"addons-535596\" DevicePath \"\""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.136119    2443 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/05bae9a3-f388-42c1-b91c-cc0a1a46f75b-gcp-creds\") on node \"addons-535596\" DevicePath \"\""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.136128    2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cf72r\" (UniqueName: \"kubernetes.io/projected/dbe43e0b-fb2d-4702-a4c9-b57ef880c114-kube-api-access-cf72r\") on node \"addons-535596\" DevicePath \"\""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.538937    2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfzf4\" (UniqueName: \"kubernetes.io/projected/0da9eb92-a72a-4e20-97a3-ff9fecea622f-kube-api-access-bfzf4\") pod \"0da9eb92-a72a-4e20-97a3-ff9fecea622f\" (UID: \"0da9eb92-a72a-4e20-97a3-ff9fecea622f\") "
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.538980    2443 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8kdt\" (UniqueName: \"kubernetes.io/projected/2d465593-59aa-4922-8d50-d95af40b4d34-kube-api-access-j8kdt\") pod \"2d465593-59aa-4922-8d50-d95af40b4d34\" (UID: \"2d465593-59aa-4922-8d50-d95af40b4d34\") "
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.540956    2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0da9eb92-a72a-4e20-97a3-ff9fecea622f-kube-api-access-bfzf4" (OuterVolumeSpecName: "kube-api-access-bfzf4") pod "0da9eb92-a72a-4e20-97a3-ff9fecea622f" (UID: "0da9eb92-a72a-4e20-97a3-ff9fecea622f"). InnerVolumeSpecName "kube-api-access-bfzf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.541022    2443 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d465593-59aa-4922-8d50-d95af40b4d34-kube-api-access-j8kdt" (OuterVolumeSpecName: "kube-api-access-j8kdt") pod "2d465593-59aa-4922-8d50-d95af40b4d34" (UID: "2d465593-59aa-4922-8d50-d95af40b4d34"). InnerVolumeSpecName "kube-api-access-j8kdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.553665    2443 scope.go:117] "RemoveContainer" containerID="623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.571157    2443 scope.go:117] "RemoveContainer" containerID="623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: E0920 18:05:15.573147    2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc" containerID="623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.573197    2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc"} err="failed to get container status \"623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc\": rpc error: code = Unknown desc = Error response from daemon: No such container: 623f3ce1dca201d48289eab0e7176e760aa1fb6aac7e4a2b6fa2634d542629dc"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.573225    2443 scope.go:117] "RemoveContainer" containerID="a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.636812    2443 scope.go:117] "RemoveContainer" containerID="a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: E0920 18:05:15.637768    2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5" containerID="a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.637812    2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5"} err="failed to get container status \"a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5\": rpc error: code = Unknown desc = Error response from daemon: No such container: a7e9fd10921f63ea91bd53cf3a86edb41530654b3ccdd6e7920705066edad9c5"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.637833    2443 scope.go:117] "RemoveContainer" containerID="b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.639356    2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j8kdt\" (UniqueName: \"kubernetes.io/projected/2d465593-59aa-4922-8d50-d95af40b4d34-kube-api-access-j8kdt\") on node \"addons-535596\" DevicePath \"\""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.639390    2443 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bfzf4\" (UniqueName: \"kubernetes.io/projected/0da9eb92-a72a-4e20-97a3-ff9fecea622f-kube-api-access-bfzf4\") on node \"addons-535596\" DevicePath \"\""
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.655563    2443 scope.go:117] "RemoveContainer" containerID="b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: E0920 18:05:15.656349    2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8" containerID="b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.656392    2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8"} err="failed to get container status \"b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8\": rpc error: code = Unknown desc = Error response from daemon: No such container: b773606b289431230ad010f261dc9050e6d110ad5104ca1beb3990cb38e5a8e8"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.656420    2443 scope.go:117] "RemoveContainer" containerID="8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.681893    2443 scope.go:117] "RemoveContainer" containerID="8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: E0920 18:05:15.682824    2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a" containerID="8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a"
	Sep 20 18:05:15 addons-535596 kubelet[2443]: I0920 18:05:15.682863    2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a"} err="failed to get container status \"8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8c6fdf1abfea23913e37fcc75a263c9df8118fb74ef5ecb35648cd4d054aad1a"
	
	
	==> storage-provisioner [047dca7997b2] <==
	I0920 17:52:44.832212       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 17:52:44.846026       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 17:52:44.846107       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 17:52:44.930946       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 17:52:44.931239       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-535596_31c30f7f-04d5-4127-a2b6-2aa0af72ccd0!
	I0920 17:52:44.931607       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d4f7b27-960c-4098-b355-4472d96b4acf", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-535596_31c30f7f-04d5-4127-a2b6-2aa0af72ccd0 became leader
	I0920 17:52:45.033504       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-535596_31c30f7f-04d5-4127-a2b6-2aa0af72ccd0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-535596 -n addons-535596
helpers_test.go:261: (dbg) Run:  kubectl --context addons-535596 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-535596 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-535596 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-535596/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 17:56:02 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9m6vt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-9m6vt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-535596
	  Normal   Pulling    7m53s (x4 over 9m14s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m53s (x4 over 9m14s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m53s (x4 over 9m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m29s (x6 over 9m13s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m2s (x21 over 9m13s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (72.53s)
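Triage note: the post-mortem above shows a leftover busybox pod stuck in ImagePullBackOff because every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected with "unauthorized: authentication failed". A minimal manual spot-check, assuming the addons-535596 profile is still running on the agent (profile name and image are taken from the logs above; the commands themselves are illustrative and not part of the test harness), would be:

	# pull the image from inside the minikube node, bypassing kubelet's back-off
	out/minikube-linux-amd64 -p addons-535596 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	# re-check the pull events recorded against the pod
	kubectl --context addons-535596 -n default get events --field-selector involvedObject.name=busybox

If the in-node pull fails the same way, the unauthorized error points at registry auth or rate limiting on the CI agent rather than at the Registry addon itself.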

                                                
                                    

Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 4.76
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.18
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.95
21 TestBinaryMirror 0.7
22 TestOffline 74.91
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 211.6
29 TestAddons/serial/Volcano 39.72
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 18.99
35 TestAddons/parallel/InspektorGadget 11.62
36 TestAddons/parallel/MetricsServer 5.62
38 TestAddons/parallel/CSI 53.78
39 TestAddons/parallel/Headlamp 16.54
40 TestAddons/parallel/CloudSpanner 5.42
41 TestAddons/parallel/LocalPath 52.65
42 TestAddons/parallel/NvidiaDevicePlugin 5.41
43 TestAddons/parallel/Yakd 11.55
44 TestAddons/StoppedEnableDisable 10.97
45 TestCertOptions 30.3
46 TestCertExpiration 228.43
47 TestDockerFlags 27.29
48 TestForceSystemdFlag 26.64
49 TestForceSystemdEnv 31.16
51 TestKVMDriverInstallOrUpdate 4.47
55 TestErrorSpam/setup 20.83
56 TestErrorSpam/start 0.53
57 TestErrorSpam/status 0.83
58 TestErrorSpam/pause 1.11
59 TestErrorSpam/unpause 1.3
60 TestErrorSpam/stop 10.81
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 68.56
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 34.86
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.22
72 TestFunctional/serial/CacheCmd/cache/add_local 1.25
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.23
77 TestFunctional/serial/CacheCmd/cache/delete 0.1
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 39.48
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 0.91
83 TestFunctional/serial/LogsFileCmd 0.93
84 TestFunctional/serial/InvalidService 4.67
86 TestFunctional/parallel/ConfigCmd 0.36
87 TestFunctional/parallel/DashboardCmd 15.39
88 TestFunctional/parallel/DryRun 0.38
89 TestFunctional/parallel/InternationalLanguage 0.15
90 TestFunctional/parallel/StatusCmd 0.97
94 TestFunctional/parallel/ServiceCmdConnect 10.59
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 29.06
98 TestFunctional/parallel/SSHCmd 0.62
99 TestFunctional/parallel/CpCmd 1.85
100 TestFunctional/parallel/MySQL 21.72
101 TestFunctional/parallel/FileSync 0.29
102 TestFunctional/parallel/CertSync 1.66
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
110 TestFunctional/parallel/License 0.18
111 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.34
117 TestFunctional/parallel/ServiceCmd/List 0.53
118 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
119 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
120 TestFunctional/parallel/ServiceCmd/Format 0.43
121 TestFunctional/parallel/ServiceCmd/URL 0.46
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
129 TestFunctional/parallel/ProfileCmd/profile_list 0.46
130 TestFunctional/parallel/MountCmd/any-port 6.52
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
132 TestFunctional/parallel/Version/short 0.05
133 TestFunctional/parallel/Version/components 0.55
134 TestFunctional/parallel/DockerEnv/bash 0.88
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.73
143 TestFunctional/parallel/ImageCommands/Setup 1.58
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
146 TestFunctional/parallel/MountCmd/specific-port 1.69
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
148 TestFunctional/parallel/MountCmd/VerifyCleanup 1.98
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.59
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.69
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.57
153 TestFunctional/delete_echo-server_images 0.03
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 96.97
160 TestMultiControlPlane/serial/DeployApp 4.73
161 TestMultiControlPlane/serial/PingHostFromPods 1.02
162 TestMultiControlPlane/serial/AddWorkerNode 23.31
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
165 TestMultiControlPlane/serial/CopyFile 15.34
166 TestMultiControlPlane/serial/StopSecondaryNode 11.32
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
168 TestMultiControlPlane/serial/RestartSecondaryNode 34.53
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 230.07
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.16
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.63
173 TestMultiControlPlane/serial/StopCluster 32.2
174 TestMultiControlPlane/serial/RestartCluster 65.85
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 38.34
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
180 TestImageBuild/serial/Setup 20.35
181 TestImageBuild/serial/NormalBuild 1.6
182 TestImageBuild/serial/BuildWithBuildArg 0.94
183 TestImageBuild/serial/BuildWithDockerIgnore 0.72
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.73
188 TestJSONOutput/start/Command 65.74
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.48
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.45
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.71
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.19
213 TestKicCustomNetwork/create_custom_network 25.34
214 TestKicCustomNetwork/use_default_bridge_network 22.55
215 TestKicExistingNetwork 22.55
216 TestKicCustomSubnet 22.62
217 TestKicStaticIP 22.66
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 51.28
222 TestMountStart/serial/StartWithMountFirst 6.62
223 TestMountStart/serial/VerifyMountFirst 0.23
224 TestMountStart/serial/StartWithMountSecond 9.56
225 TestMountStart/serial/VerifyMountSecond 0.23
226 TestMountStart/serial/DeleteFirst 1.44
227 TestMountStart/serial/VerifyMountPostDelete 0.24
228 TestMountStart/serial/Stop 1.16
229 TestMountStart/serial/RestartStopped 7.87
230 TestMountStart/serial/VerifyMountPostStop 0.23
233 TestMultiNode/serial/FreshStart2Nodes 67.92
234 TestMultiNode/serial/DeployApp2Nodes 36.48
235 TestMultiNode/serial/PingHostFrom2Pods 0.69
236 TestMultiNode/serial/AddNode 15.17
237 TestMultiNode/serial/MultiNodeLabels 0.07
238 TestMultiNode/serial/ProfileList 0.63
239 TestMultiNode/serial/CopyFile 8.62
240 TestMultiNode/serial/StopNode 2.04
241 TestMultiNode/serial/StartAfterStop 9.55
242 TestMultiNode/serial/RestartKeepsNodes 109.28
243 TestMultiNode/serial/DeleteNode 5.08
244 TestMultiNode/serial/StopMultiNode 21.37
245 TestMultiNode/serial/RestartMultiNode 51.72
246 TestMultiNode/serial/ValidateNameConflict 23.14
251 TestPreload 122.21
253 TestScheduledStopUnix 93.6
254 TestSkaffold 98.85
256 TestInsufficientStorage 9.42
257 TestRunningBinaryUpgrade 79.93
259 TestKubernetesUpgrade 323.67
260 TestMissingContainerUpgrade 105.03
261 TestStoppedBinaryUpgrade/Setup 0.6
263 TestPause/serial/Start 76.86
264 TestStoppedBinaryUpgrade/Upgrade 119.79
265 TestPause/serial/SecondStartNoReconfiguration 30.69
266 TestPause/serial/Pause 0.62
267 TestPause/serial/VerifyStatus 0.43
268 TestPause/serial/Unpause 0.64
269 TestPause/serial/PauseAgain 0.66
270 TestPause/serial/DeletePaused 2.13
271 TestPause/serial/VerifyDeletedResources 0.76
272 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
282 TestNoKubernetes/serial/StartWithK8s 25.22
295 TestStartStop/group/old-k8s-version/serial/FirstStart 127.29
296 TestNoKubernetes/serial/StartWithStopK8s 16.61
297 TestNoKubernetes/serial/Start 6.03
298 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
299 TestNoKubernetes/serial/ProfileList 43.08
300 TestNoKubernetes/serial/Stop 1.22
301 TestNoKubernetes/serial/StartNoArgs 8.06
303 TestStartStop/group/no-preload/serial/FirstStart 44.39
304 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
306 TestStartStop/group/embed-certs/serial/FirstStart 67.83
307 TestStartStop/group/no-preload/serial/DeployApp 7.24
308 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.78
310 TestStartStop/group/no-preload/serial/Stop 10.73
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.67
312 TestStartStop/group/old-k8s-version/serial/Stop 10.82
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
314 TestStartStop/group/no-preload/serial/SecondStart 300.27
315 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
316 TestStartStop/group/old-k8s-version/serial/SecondStart 125.31
317 TestStartStop/group/embed-certs/serial/DeployApp 11.37
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.95
319 TestStartStop/group/embed-certs/serial/Stop 10.9
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/embed-certs/serial/SecondStart 262.58
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.17
324 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
326 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.29
327 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
328 TestStartStop/group/old-k8s-version/serial/Pause 2.31
330 TestStartStop/group/newest-cni/serial/FirstStart 28.76
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.85
332 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.79
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
334 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.74
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
337 TestStartStop/group/newest-cni/serial/Stop 9.79
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
339 TestStartStop/group/newest-cni/serial/SecondStart 14.95
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
343 TestStartStop/group/newest-cni/serial/Pause 2.36
344 TestNetworkPlugins/group/auto/Start 66.48
345 TestNetworkPlugins/group/auto/KubeletFlags 0.25
346 TestNetworkPlugins/group/auto/NetCatPod 9.17
347 TestNetworkPlugins/group/auto/DNS 0.14
348 TestNetworkPlugins/group/auto/Localhost 0.11
349 TestNetworkPlugins/group/auto/HairPin 0.11
350 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
351 TestNetworkPlugins/group/flannel/Start 39.67
352 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
353 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
354 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.2
355 TestStartStop/group/embed-certs/serial/Pause 2.32
356 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
357 TestNetworkPlugins/group/enable-default-cni/Start 67.8
358 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
359 TestStartStop/group/no-preload/serial/Pause 2.49
360 TestNetworkPlugins/group/bridge/Start 67.57
361 TestNetworkPlugins/group/flannel/ControllerPod 6.01
362 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
363 TestNetworkPlugins/group/flannel/NetCatPod 9.18
364 TestNetworkPlugins/group/flannel/DNS 0.13
365 TestNetworkPlugins/group/flannel/Localhost 0.11
366 TestNetworkPlugins/group/flannel/HairPin 0.1
367 TestNetworkPlugins/group/kubenet/Start 42.2
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.19
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
371 TestNetworkPlugins/group/bridge/NetCatPod 14.19
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
375 TestNetworkPlugins/group/bridge/DNS 0.16
376 TestNetworkPlugins/group/bridge/Localhost 0.13
377 TestNetworkPlugins/group/bridge/HairPin 0.12
378 TestNetworkPlugins/group/calico/Start 67.96
379 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
380 TestNetworkPlugins/group/kubenet/NetCatPod 10.21
381 TestNetworkPlugins/group/kindnet/Start 62.49
382 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
383 TestNetworkPlugins/group/kubenet/DNS 0.13
384 TestNetworkPlugins/group/kubenet/Localhost 0.12
385 TestNetworkPlugins/group/kubenet/HairPin 0.1
386 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
387 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
388 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.79
389 TestNetworkPlugins/group/custom-flannel/Start 50.16
390 TestNetworkPlugins/group/false/Start 65.25
391 TestNetworkPlugins/group/calico/ControllerPod 6.01
392 TestNetworkPlugins/group/calico/KubeletFlags 0.29
393 TestNetworkPlugins/group/calico/NetCatPod 10.21
394 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
395 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
396 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
397 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
398 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
399 TestNetworkPlugins/group/calico/DNS 0.12
400 TestNetworkPlugins/group/calico/Localhost 0.18
401 TestNetworkPlugins/group/calico/HairPin 0.13
402 TestNetworkPlugins/group/kindnet/DNS 0.13
403 TestNetworkPlugins/group/custom-flannel/DNS 0.16
404 TestNetworkPlugins/group/kindnet/Localhost 0.14
405 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
406 TestNetworkPlugins/group/kindnet/HairPin 0.13
407 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
408 TestNetworkPlugins/group/false/KubeletFlags 0.3
409 TestNetworkPlugins/group/false/NetCatPod 10.2
410 TestNetworkPlugins/group/false/DNS 0.12
411 TestNetworkPlugins/group/false/Localhost 0.1
412 TestNetworkPlugins/group/false/HairPin 0.1
TestDownloadOnly/v1.20.0/json-events (8.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-718518 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-718518 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (8.219320674s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.22s)
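For reference, -o=json makes minikube stream its progress as structured JSON events (CloudEvents) on stdout instead of human-readable text, which is what the json-events assertions consume. A quick local way to eyeball the event stream, reusing the exact start command from the log above (the jq filter is illustrative, not part of the test), is:

	out/minikube-linux-amd64 start -o=json --download-only -p download-only-718518 \
	  --kubernetes-version=v1.20.0 --driver=docker | jq -r .type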

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 17:51:43.307516   87188 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 17:51:43.307651   87188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
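The preload-exists assertion is only a filesystem check that the tarball cached by the previous step is present; a manual equivalent on the agent (path copied verbatim from the "Found local preload" line above) would be:

	ls -lh /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4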

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-718518
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-718518: exit status 85 (56.095493ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-718518 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |          |
	|         | -p download-only-718518        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:51:35
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:51:35.124884   87200 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:51:35.124999   87200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:35.125008   87200 out.go:358] Setting ErrFile to fd 2...
	I0920 17:51:35.125012   87200 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:35.125184   87200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	W0920 17:51:35.125310   87200 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-80428/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-80428/.minikube/config/config.json: no such file or directory
	I0920 17:51:35.125859   87200 out.go:352] Setting JSON to true
	I0920 17:51:35.126674   87200 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5647,"bootTime":1726849048,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:51:35.126769   87200 start.go:139] virtualization: kvm guest
	I0920 17:51:35.129222   87200 out.go:97] [download-only-718518] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:51:35.129324   87200 notify.go:220] Checking for updates...
	W0920 17:51:35.129334   87200 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 17:51:35.130493   87200 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:51:35.131757   87200 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:51:35.132899   87200 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 17:51:35.134122   87200 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	I0920 17:51:35.135365   87200 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 17:51:35.137458   87200 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:51:35.137640   87200 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:51:35.158157   87200 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:51:35.158225   87200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:35.531027   87200 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 17:51:35.521814557 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:35.531231   87200 docker.go:318] overlay module found
	I0920 17:51:35.532791   87200 out.go:97] Using the docker driver based on user configuration
	I0920 17:51:35.532819   87200 start.go:297] selected driver: docker
	I0920 17:51:35.532827   87200 start.go:901] validating driver "docker" against <nil>
	I0920 17:51:35.532951   87200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:35.578395   87200 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 17:51:35.569679864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:35.578594   87200 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:51:35.579122   87200 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 17:51:35.579276   87200 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:51:35.580846   87200 out.go:169] Using Docker driver with root privileges
	I0920 17:51:35.581834   87200 cni.go:84] Creating CNI manager for ""
	I0920 17:51:35.581896   87200 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 17:51:35.581962   87200 start.go:340] cluster config:
	{Name:download-only-718518 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-718518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 17:51:35.583026   87200 out.go:97] Starting "download-only-718518" primary control-plane node in "download-only-718518" cluster
	I0920 17:51:35.583058   87200 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 17:51:35.584011   87200 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 17:51:35.584031   87200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:51:35.584147   87200 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 17:51:35.599334   87200 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:51:35.599503   87200 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 17:51:35.599584   87200 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 17:51:35.604041   87200 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 17:51:35.604066   87200 cache.go:56] Caching tarball of preloaded images
	I0920 17:51:35.604204   87200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:51:35.605838   87200 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 17:51:35.605867   87200 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 17:51:35.631611   87200 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 17:51:38.271747   87200 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 17:51:38.271868   87200 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 17:51:39.068316   87200 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 17:51:39.068652   87200 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/download-only-718518/config.json ...
	I0920 17:51:39.068698   87200 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/download-only-718518/config.json: {Name:mkd53f0835d1ceeea909795ffeda2c6053f59bc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 17:51:39.068865   87200 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 17:51:39.069023   87200 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19678-80428/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-718518 host does not exist
	  To start a cluster, run: "minikube start -p download-only-718518"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
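
The preload tarball fetched above is pinned by the md5 in the download URL and verified by preload.go:236-254 before use. As a minimal sketch, the same check can be repeated by hand against the cached file, using the hash and cache path from the log lines above:

    # Verify the cached v1.20.0 preload against the md5 from the download URL.
    cd /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball
    echo "9a82241e9b8b4ad2b5cca73108f2c7a3  preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4" | md5sum -c -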

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-718518
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnly/v1.31.1/json-events (4.76s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-289512 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-289512 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.762485052s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.76s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 17:51:48.430195   87188 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 17:51:48.430242   87188 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)
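
preload-exists passes without any network traffic because it only consults the local cache populated by the earlier json-events run. A quick way to see exactly what preload.go:146 found, using the path it reports:

    # List the cached preload tarballs the test checks for.
    ls -lh /home/jenkins/minikube-integration/19678-80428/.minikube/cache/preloaded-tarball/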

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-289512
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-289512: exit status 85 (55.086382ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-718518 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | -p download-only-718518        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:51 UTC |
	| delete  | -p download-only-718518        | download-only-718518 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC | 20 Sep 24 17:51 UTC |
	| start   | -o=json --download-only        | download-only-289512 | jenkins | v1.34.0 | 20 Sep 24 17:51 UTC |                     |
	|         | -p download-only-289512        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 17:51:43
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 17:51:43.704249   87561 out.go:345] Setting OutFile to fd 1 ...
	I0920 17:51:43.704355   87561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:43.704364   87561 out.go:358] Setting ErrFile to fd 2...
	I0920 17:51:43.704368   87561 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 17:51:43.704533   87561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 17:51:43.705042   87561 out.go:352] Setting JSON to true
	I0920 17:51:43.705867   87561 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":5656,"bootTime":1726849048,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 17:51:43.705963   87561 start.go:139] virtualization: kvm guest
	I0920 17:51:43.708020   87561 out.go:97] [download-only-289512] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 17:51:43.708190   87561 notify.go:220] Checking for updates...
	I0920 17:51:43.709573   87561 out.go:169] MINIKUBE_LOCATION=19678
	I0920 17:51:43.710867   87561 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 17:51:43.712313   87561 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 17:51:43.713590   87561 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	I0920 17:51:43.714815   87561 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0920 17:51:43.716933   87561 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 17:51:43.717149   87561 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 17:51:43.739320   87561 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 17:51:43.739437   87561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:43.782776   87561 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-20 17:51:43.774450794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:43.782885   87561 docker.go:318] overlay module found
	I0920 17:51:43.784528   87561 out.go:97] Using the docker driver based on user configuration
	I0920 17:51:43.784559   87561 start.go:297] selected driver: docker
	I0920 17:51:43.784568   87561 start.go:901] validating driver "docker" against <nil>
	I0920 17:51:43.784649   87561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 17:51:43.829927   87561 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-09-20 17:51:43.820784372 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 17:51:43.830118   87561 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 17:51:43.830738   87561 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0920 17:51:43.830944   87561 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 17:51:43.832747   87561 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-289512 host does not exist
	  To start a cluster, run: "minikube start -p download-only-289512"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.18s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-289512
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (0.95s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-952527 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-952527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-952527
--- PASS: TestDownloadOnlyKic (0.95s)

TestBinaryMirror (0.7s)

=== RUN   TestBinaryMirror
I0920 17:51:49.970964   87188 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-440140 --alsologtostderr --binary-mirror http://127.0.0.1:39251 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-440140" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-440140
--- PASS: TestBinaryMirror (0.70s)
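
The binary.go:74 line explains the PASS: kubectl is fetched through the local --binary-mirror and validated against the upstream .sha256 named in the checksum=file: URL rather than being cached. A hedged sketch of that validation done manually, with the same URLs:

    # Download kubectl and check it against its published sha256.
    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
    echo "$(curl -Ls https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum --check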

TestOffline (74.91s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-396160 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-396160 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m12.847600123s)
helpers_test.go:175: Cleaning up "offline-docker-396160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-396160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-396160: (2.065574853s)
--- PASS: TestOffline (74.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-535596
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-535596: exit status 85 (48.840465ms)
-- stdout --
	* Profile "addons-535596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-535596"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
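
The PASS hinges on the exit code: enabling an addon against a profile that does not exist must fail, and the run above exited with status 85. A minimal sketch of checking that by hand:

    # Expect a non-zero exit (85 above) when the profile is missing.
    out/minikube-linux-amd64 addons enable dashboard -p addons-535596
    echo "exit status: $?"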

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-535596
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-535596: exit status 85 (48.059475ms)
-- stdout --
	* Profile "addons-535596" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-535596"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (211.6s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-535596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-535596 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m31.595563088s)
--- PASS: TestAddons/Setup (211.60s)

TestAddons/serial/Volcano (39.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:843: volcano-admission stabilized in 9.413496ms
addons_test.go:835: volcano-scheduler stabilized in 9.445548ms
addons_test.go:851: volcano-controller stabilized in 9.471743ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-v2rb7" [2c890344-65dd-4f25-8921-8f2e8627b8dd] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00344017s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-kc4mn" [f9cae3fc-fd3a-445d-b7bc-ecb0a54f6580] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003844676s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-n8rjs" [09ec21d2-18ed-4319-9bb4-a0a324e7a2c3] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004293195s
addons_test.go:870: (dbg) Run:  kubectl --context addons-535596 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-535596 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-535596 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [09df6d2a-a765-48b5-bdb4-8be5e7bacffb] Pending
helpers_test.go:344: "test-job-nginx-0" [09df6d2a-a765-48b5-bdb4-8be5e7bacffb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [09df6d2a-a765-48b5-bdb4-8be5e7bacffb] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003281364s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable volcano --alsologtostderr -v=1: (10.391148012s)
--- PASS: TestAddons/serial/Volcano (39.72s)
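
The three stabilization waits above poll pod health through the test helpers. An equivalent standalone check with kubectl wait, assuming the same label selector and namespace shown in the log, would look like:

    # Wait for the volcano scheduler pods to become Ready (the 6m0s poll, by hand).
    kubectl --context addons-535596 -n volcano-system wait pod \
      --selector app=volcano-scheduler --for=condition=Ready --timeout=360s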

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-535596 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-535596 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/Ingress (18.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-535596 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-535596 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-535596 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a3728423-e29e-4fbc-aa32-c920dd000968] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a3728423-e29e-4fbc-aa32-c920dd000968] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003653545s
I0920 18:04:19.497601   87188 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-535596 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable ingress-dns --alsologtostderr -v=1: (1.294899239s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable ingress --alsologtostderr -v=1: (7.55582215s)
--- PASS: TestAddons/parallel/Ingress (18.99s)
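
The ingress-dns leg of the test (addons_test.go:289-295) resolves a test record against the cluster IP. As a sketch, those two steps collapse into one line:

    # Resolve the ingress-dns example host against the address "minikube ip" reports.
    nslookup hello-john.test "$(out/minikube-linux-amd64 -p addons-535596 ip)"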

TestAddons/parallel/InspektorGadget (11.62s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-xsl7r" [a0e87d01-d2bd-4d8d-a75f-403cda25fec2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003682707s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-535596
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-535596: (5.610770728s)
--- PASS: TestAddons/parallel/InspektorGadget (11.62s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.224656ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-gtp88" [a1ca81cf-49a8-4226-814b-5471bd80feb6] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002381547s
addons_test.go:413: (dbg) Run:  kubectl --context addons-535596 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/CSI (53.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0920 18:04:20.976055   87188 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 18:04:20.980194   87188 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 18:04:20.980213   87188 kapi.go:107] duration metric: took 4.163749ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.169958ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-535596 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-535596 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [12699931-c6d2-47c4-ae0c-927f2dcbc645] Pending
helpers_test.go:344: "task-pv-pod" [12699931-c6d2-47c4-ae0c-927f2dcbc645] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [12699931-c6d2-47c4-ae0c-927f2dcbc645] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003666202s
addons_test.go:528: (dbg) Run:  kubectl --context addons-535596 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-535596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-535596 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-535596 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-535596 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-535596 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-535596 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [328d05a1-54b5-47a7-bba4-0e19a4a33757] Pending
helpers_test.go:344: "task-pv-pod-restore" [328d05a1-54b5-47a7-bba4-0e19a4a33757] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [328d05a1-54b5-47a7-bba4-0e19a4a33757] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004218583s
addons_test.go:570: (dbg) Run:  kubectl --context addons-535596 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-535596 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-535596 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.42129231s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (53.78s)
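
Every helpers_test.go:394 line above is one round of a phase poll on the claim. A minimal shell sketch of the same loop, assuming Bound is the phase being waited for:

    # Re-read the PVC phase until the claim reports Bound.
    until [ "$(kubectl --context addons-535596 get pvc hpvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
      sleep 2
    done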

TestAddons/parallel/Headlamp (16.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-535596 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-x9gfk" [13639492-9e9c-4752-845d-665dc7b44f37] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-x9gfk" [13639492-9e9c-4752-845d-665dc7b44f37] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-x9gfk" [13639492-9e9c-4752-845d-665dc7b44f37] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003988018s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable headlamp --alsologtostderr -v=1: (5.880206479s)
--- PASS: TestAddons/parallel/Headlamp (16.54s)

TestAddons/parallel/CloudSpanner (5.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-sxxv2" [df3872f9-42c9-432d-a79e-53abab967c64] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003482974s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-535596
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

TestAddons/parallel/LocalPath (52.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-535596 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-535596 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [4831f68b-3533-41bb-8e27-9d7a7aa890c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [4831f68b-3533-41bb-8e27-9d7a7aa890c9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [4831f68b-3533-41bb-8e27-9d7a7aa890c9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002723734s
addons_test.go:938: (dbg) Run:  kubectl --context addons-535596 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 ssh "cat /opt/local-path-provisioner/pvc-1a02ad53-06eb-4e3d-819b-4c8d67dfc852_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-535596 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-535596 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.854912618s)
--- PASS: TestAddons/parallel/LocalPath (52.65s)
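
The host path read at addons_test.go:947 follows the local-path provisioner's <pv-name>_<namespace>_<pvc-name> directory layout, which matches the pvc-1a02ad53-06eb-4e3d-819b-4c8d67dfc852_default_test-pvc path in the log. A sketch of deriving that path instead of hard-coding it (the PV variable here is illustrative):

    # Derive the provisioner directory from the PV bound to the claim.
    PV=$(kubectl --context addons-535596 get pvc test-pvc -o jsonpath='{.spec.volumeName}')
    out/minikube-linux-amd64 -p addons-535596 ssh "cat /opt/local-path-provisioner/${PV}_default_test-pvc/file1"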

TestAddons/parallel/NvidiaDevicePlugin (5.41s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2qwl9" [8548f3fc-ad81-478f-ad1a-1f23a856925c] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003643168s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-535596
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.41s)

TestAddons/parallel/Yakd (11.55s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vn8cb" [e1e140d8-7c0f-45c8-8bd6-c11e9a44c731] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003666209s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-535596 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-535596 addons disable yakd --alsologtostderr -v=1: (5.540938238s)
--- PASS: TestAddons/parallel/Yakd (11.55s)

TestAddons/StoppedEnableDisable (10.97s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-535596
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-535596: (10.739187088s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-535596
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-535596
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-535596
--- PASS: TestAddons/StoppedEnableDisable (10.97s)

TestCertOptions (30.3s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-545241 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-545241 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (25.980685455s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-545241 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-545241 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-545241 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-545241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-545241
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-545241: (3.689645447s)
--- PASS: TestCertOptions (30.30s)
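
The openssl call at cert_options_test.go:60 is what proves the extra --apiserver-ips, --apiserver-names, and the 8555 port landed in the serving certificate. A sketch that narrows the same output to the relevant block:

    # Show only the SAN section of the apiserver certificate.
    out/minikube-linux-amd64 -p cert-options-545241 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"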

TestCertExpiration (228.43s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836313 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836313 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (25.852255104s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-836313 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0920 18:40:11.959734   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:17.081509   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:22.319756   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-836313 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.509581537s)
helpers_test.go:175: Cleaning up "cert-expiration-836313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-836313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-836313: (2.069452256s)
--- PASS: TestCertExpiration (228.43s)
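
After the second start with --cert-expiration=8760h the cluster certificates are reissued with the longer lifetime. A hedged sketch of inspecting the new expiry, assuming the apiserver certificate sits at the same path probed by TestCertOptions above:

    # Print the notAfter date of the regenerated apiserver certificate.
    out/minikube-linux-amd64 -p cert-expiration-836313 ssh \
      "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"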

TestDockerFlags (27.29s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-574876 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-574876 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.713824197s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-574876 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-574876 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-574876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-574876
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-574876: (2.027895683s)
--- PASS: TestDockerFlags (27.29s)
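
The assertions here read the Docker unit inside the node; a minimal sketch of the same check, assuming profile "demo":

  $ minikube start -p demo --docker-env=FOO=BAR --docker-opt=debug
  $ minikube -p demo ssh "sudo systemctl show docker --property=Environment --no-pager"   # FOO=BAR should be listed
  $ minikube -p demo ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # the --docker-opt values should appear here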

                                                
                                    
TestForceSystemdFlag (26.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-641418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-641418 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.900163948s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-641418 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-641418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-641418
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-641418: (3.419852569s)
--- PASS: TestForceSystemdFlag (26.64s)
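
A minimal sketch of the same check by hand; with --force-systemd the test expects the reported cgroup driver to be systemd:

  $ minikube start -p demo --force-systemd
  $ minikube -p demo ssh "docker info --format {{.CgroupDriver}}"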

                                                
                                    
TestForceSystemdEnv (31.16s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-680156 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-680156 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (26.841080776s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-680156 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-680156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-680156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-680156: (3.941062934s)
--- PASS: TestForceSystemdEnv (31.16s)

                                                
                                    
TestKVMDriverInstallOrUpdate (4.47s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.47s)

                                                
                                    
TestErrorSpam/setup (20.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-795746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-795746 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-795746 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-795746 --driver=docker  --container-runtime=docker: (20.829742701s)
--- PASS: TestErrorSpam/setup (20.83s)

                                                
                                    
TestErrorSpam/start (0.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

                                                
                                    
TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 status
--- PASS: TestErrorSpam/status (0.83s)

                                                
                                    
TestErrorSpam/pause (1.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 pause
--- PASS: TestErrorSpam/pause (1.11s)

                                                
                                    
TestErrorSpam/unpause (1.3s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 unpause
--- PASS: TestErrorSpam/unpause (1.30s)

                                                
                                    
TestErrorSpam/stop (10.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 stop: (10.644246802s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-795746 --log_dir /tmp/nospam-795746 stop
--- PASS: TestErrorSpam/stop (10.81s)
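
All TestErrorSpam subtests share one pattern: run a subcommand against a dedicated --log_dir and fail if unexpected warning or error lines show up. A minimal sketch of the sequence exercised above ("nospam" is an arbitrary profile name):

  $ minikube start -p nospam -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam
  $ minikube -p nospam --log_dir /tmp/nospam pause
  $ minikube -p nospam --log_dir /tmp/nospam unpause
  $ minikube -p nospam --log_dir /tmp/nospam stop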

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-80428/.minikube/files/etc/test/nested/copy/87188/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (68.56s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-831303 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m8.560271642s)
--- PASS: TestFunctional/serial/StartWithProxy (68.56s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (34.86s)

=== RUN   TestFunctional/serial/SoftStart
I0920 18:07:24.818264   87188 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-831303 --alsologtostderr -v=8: (34.857305148s)
functional_test.go:663: soft start took 34.858095882s for "functional-831303" cluster.
I0920 18:07:59.675975   87188 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (34.86s)
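
"Soft start" means re-running start against a profile whose node already exists, so minikube reuses it instead of re-provisioning; a minimal sketch:

  $ minikube start -p demo    # first start provisions the node
  $ minikube start -p demo    # soft start: picks up the existing node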

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-831303 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.22s)
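
The cache subcommands in this group can be driven by hand the same way; a minimal sketch:

  $ minikube -p demo cache add registry.k8s.io/pause:3.1
  $ minikube cache list
  $ minikube cache delete registry.k8s.io/pause:3.1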

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-831303 /tmp/TestFunctionalserialCacheCmdcacheadd_local3967767992/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache add minikube-local-cache-test:functional-831303
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache delete minikube-local-cache-test:functional-831303
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-831303
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (253.194718ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.23s)
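
The non-zero exit above is expected: the image is removed inside the node, so crictl cannot find it until cache reload pushes it back. A minimal sketch:

  $ minikube -p demo ssh sudo docker rmi registry.k8s.io/pause:latest
  $ minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
  $ minikube -p demo cache reload
  $ minikube -p demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again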

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 kubectl -- --context functional-831303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-831303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (39.48s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-831303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.483324278s)
functional_test.go:761: restart took 39.483463597s for "functional-831303" cluster.
I0920 18:08:44.628994   87188 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.48s)
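
--extra-config takes component.key=value pairs and is applied by restarting the same profile; a minimal sketch:

  $ minikube start -p demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all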

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-831303 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
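
The health check boils down to one query over the control-plane pods; a minimal sketch (the test then asserts each component's phase is Running and its status is Ready):

  $ kubectl get po -l tier=control-plane -n kube-system -o=json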

                                                
                                    
TestFunctional/serial/LogsCmd (0.91s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 logs
--- PASS: TestFunctional/serial/LogsCmd (0.91s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 logs --file /tmp/TestFunctionalserialLogsFileCmd478379405/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.93s)

                                                
                                    
TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-831303 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-831303
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-831303: exit status 115 (309.351822ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30780 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-831303 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-831303 delete -f testdata/invalidsvc.yaml: (1.193949687s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)
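
Exit status 115 (SVC_UNREACHABLE) is the expected outcome for a service with no running backing pod; a minimal sketch, where invalidsvc.yaml is any Service whose selector matches nothing:

  $ kubectl apply -f invalidsvc.yaml
  $ minikube service invalid-svc -p demo    # exits 115: no running pod for the service
  $ kubectl delete -f invalidsvc.yaml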

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 config get cpus: exit status 14 (73.562674ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 config get cpus: exit status 14 (51.516303ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
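
The exit-14 results above are expected: config get fails when the key is unset. A minimal sketch of the cycle:

  $ minikube config set cpus 2
  $ minikube config get cpus    # prints 2
  $ minikube config unset cpus
  $ minikube config get cpus    # exit status 14: key not found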

                                                
                                    
TestFunctional/parallel/DashboardCmd (15.39s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-831303 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-831303 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 139534: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.39s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-831303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (178.36736ms)

-- stdout --
	* [functional-831303] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 18:09:03.856296  138233 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:09:03.856542  138233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:03.856552  138233 out.go:358] Setting ErrFile to fd 2...
	I0920 18:09:03.856557  138233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:03.856801  138233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:09:03.857341  138233 out.go:352] Setting JSON to false
	I0920 18:09:03.858481  138233 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6696,"bootTime":1726849048,"procs":368,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:09:03.858596  138233 start.go:139] virtualization: kvm guest
	I0920 18:09:03.860861  138233 out.go:177] * [functional-831303] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0920 18:09:03.862233  138233 notify.go:220] Checking for updates...
	I0920 18:09:03.862257  138233 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:09:03.864075  138233 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:09:03.866597  138233 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 18:09:03.868315  138233 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	I0920 18:09:03.869665  138233 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:09:03.870988  138233 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:09:03.872723  138233 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:09:03.873402  138233 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:09:03.896757  138233 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:09:03.896832  138233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:03.944886  138233 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:09:03.93378795 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:09:03.945072  138233 docker.go:318] overlay module found
	I0920 18:09:03.946856  138233 out.go:177] * Using the docker driver based on existing profile
	I0920 18:09:03.947984  138233 start.go:297] selected driver: docker
	I0920 18:09:03.947997  138233 start.go:901] validating driver "docker" against &{Name:functional-831303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-831303 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:03.948099  138233 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:09:03.950214  138233 out.go:201] 
	W0920 18:09:03.951469  138233 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 18:09:03.952598  138233 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.38s)
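
The first invocation is meant to fail: --dry-run still runs config validation, and 250MB is below the 1800MB minimum the error reports, hence exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A minimal sketch:

  $ minikube start -p demo --dry-run --memory 250MB    # exits 23
  $ minikube start -p demo --dry-run                   # validates without touching the cluster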

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-831303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-831303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (146.982975ms)

-- stdout --
	* [functional-831303] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 18:09:04.130752  138578 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:09:04.130870  138578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:04.130878  138578 out.go:358] Setting ErrFile to fd 2...
	I0920 18:09:04.130883  138578 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:09:04.131153  138578 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:09:04.131711  138578 out.go:352] Setting JSON to false
	I0920 18:09:04.132829  138578 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":6696,"bootTime":1726849048,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0920 18:09:04.132950  138578 start.go:139] virtualization: kvm guest
	I0920 18:09:04.135023  138578 out.go:177] * [functional-831303] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0920 18:09:04.136477  138578 notify.go:220] Checking for updates...
	I0920 18:09:04.136498  138578 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 18:09:04.137800  138578 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 18:09:04.139035  138578 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	I0920 18:09:04.140151  138578 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	I0920 18:09:04.141247  138578 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0920 18:09:04.142392  138578 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 18:09:04.144012  138578 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:09:04.144520  138578 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 18:09:04.169515  138578 docker.go:123] docker version: linux-27.3.0:Docker Engine - Community
	I0920 18:09:04.169614  138578 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:09:04.220591  138578 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 18:09:04.210393435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bri
dge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:09:04.220697  138578 docker.go:318] overlay module found
	I0920 18:09:04.222638  138578 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 18:09:04.223715  138578 start.go:297] selected driver: docker
	I0920 18:09:04.223728  138578 start.go:901] validating driver "docker" against &{Name:functional-831303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-831303 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 18:09:04.223808  138578 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 18:09:04.225687  138578 out.go:201] 
	W0920 18:09:04.226893  138578 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 18:09:04.227964  138578 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-831303 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-831303 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-86d9k" [bd9ac42a-3052-4eb0-899f-02e0ec77ad6d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-86d9k" [bd9ac42a-3052-4eb0-899f-02e0ec77ad6d] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.025310835s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32263
functional_test.go:1675: http://192.168.49.2:32263: success! body:

Hostname: hello-node-connect-67bdd5bbb4-86d9k

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32263
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.59s)
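
The echoserver response above confirms end-to-end NodePort connectivity; a minimal sketch of the same flow:

  $ kubectl create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  $ kubectl expose deployment hello-node --type=NodePort --port=8080
  $ minikube service hello-node --url    # prints the reachable http://<node-ip>:<nodeport>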

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0db3288a-7499-4907-aa8b-170ececff9fb] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003106823s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-831303 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-831303 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-831303 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-831303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [81fdd765-3f16-4b2f-b39b-546e18a5152b] Pending
helpers_test.go:344: "sp-pod" [81fdd765-3f16-4b2f-b39b-546e18a5152b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [81fdd765-3f16-4b2f-b39b-546e18a5152b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004912861s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-831303 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-831303 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-831303 delete -f testdata/storage-provisioner/pod.yaml: (1.240680279s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-831303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [595929ce-533b-4e02-9e8f-0b27a2b7cd82] Pending
helpers_test.go:344: "sp-pod" [595929ce-533b-4e02-9e8f-0b27a2b7cd82] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [595929ce-533b-4e02-9e8f-0b27a2b7cd82] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.012322645s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-831303 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.06s)
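
The delete/re-create in the middle is what proves persistence: the second pod mounts the same claim and still sees the file. A minimal sketch, with pvc.yaml/pod.yaml as in the test's storage-provisioner testdata:

  $ kubectl apply -f pvc.yaml && kubectl get pvc myclaim
  $ kubectl apply -f pod.yaml
  $ kubectl exec sp-pod -- touch /tmp/mount/foo
  $ kubectl delete -f pod.yaml && kubectl apply -f pod.yaml
  $ kubectl exec sp-pod -- ls /tmp/mount    # foo survives the pod swap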

                                                
                                    
TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.85s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh -n functional-831303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cp functional-831303:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2039097940/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh -n functional-831303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh -n functional-831303 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)
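Note: the three cp runs above cover host-to-guest, guest-to-host, and copying into a guest directory that does not yet exist, each verified by an ssh'd sudo cat. In the guest-to-host form the profile name prefixes the in-VM path, e.g. (destination path illustrative):

    out/minikube-linux-amd64 -p functional-831303 cp functional-831303:/home/docker/cp-test.txt /tmp/cp-test.txt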

TestFunctional/parallel/MySQL (21.72s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-831303 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-6tzv7" [17e25c0d-c83a-4bca-adeb-9b9a24e37ade] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-6tzv7" [17e25c0d-c83a-4bca-adeb-9b9a24e37ade] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.0033394s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;": exit status 1 (106.949386ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:09:33.547135   87188 retry.go:31] will retry after 1.277368881s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;": exit status 1 (103.920906ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0920 18:09:34.929611   87188 retry.go:31] will retry after 864.237801ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.72s)
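Note: the two ERROR 2002 exits above are the usual startup race: the pod reports Running before mysqld is listening on its socket, so retry.go backs off until the query succeeds. A rough manual equivalent (pod name from this run; the loop itself is illustrative, not part of the suite):

    until kubectl --context functional-831303 exec mysql-6cdb49bbb-6tzv7 -- mysql -ppassword -e "show databases;"; do
      sleep 2   # the harness uses randomized backoff rather than a fixed sleep
    done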

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/87188/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /etc/test/nested/copy/87188/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.66s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/87188.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /etc/ssl/certs/87188.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/87188.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /usr/share/ca-certificates/87188.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/871882.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /etc/ssl/certs/871882.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/871882.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /usr/share/ca-certificates/871882.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)
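Note: the hash-named entries checked above (51391683.0, 3ec20f2e.0) are presumably OpenSSL subject-hash links, which is how TLS clients locate certificates in /etc/ssl/certs. Assuming that is the scheme, the hash can be recomputed from this run's cert:

    openssl x509 -in /usr/share/ca-certificates/87188.pem -noout -hash
    # expected to print 51391683, matching /etc/ssl/certs/51391683.0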

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-831303 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh "sudo systemctl is-active crio": exit status 1 (272.545493ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
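Note: the non-zero exit is the point of this test: systemctl is-active reports "inactive" and exits non-zero (status 3 here, surfaced through ssh) because crio should not be running while the docker runtime is active. A manual check (the exit-code echo is illustrative):

    out/minikube-linux-amd64 -p functional-831303 ssh "sudo systemctl is-active crio"; echo "exit=$?"
    # expect "inactive" plus a non-zero exit under the docker runtime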

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-831303 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-831303 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-g8xgr" [fbe8bd85-471a-4037-81d3-5677defc402d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-g8xgr" [fbe8bd85-471a-4037-81d3-5677defc402d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.031671761s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)
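Note: the hello-node deployment and its NodePort service created here are what the later ServiceCmd subtests resolve (port 30237 on node IP 192.168.49.2 in this run). The assigned port can also be read directly:

    kubectl --context functional-831303 get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'
    # 30237 in this run; `minikube service --url` joins it with the node IP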

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 135665: os: process already finished
helpers_test.go:508: unable to kill pid 135262: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.34s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-831303 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cb5f8171-544e-4755-972a-7c1ef8e184d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cb5f8171-544e-4755-972a-7c1ef8e184d5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004071852s
I0920 18:09:02.445750   87188 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.34s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service list -o json
functional_test.go:1494: Took "520.314631ms" to run "out/minikube-linux-amd64 -p functional-831303 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30237
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30237
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-831303 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.215.205 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
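Note: 10.104.215.205 is the service's LoadBalancer ingress IP, reachable from the host only while the tunnel process keeps the cluster's service network routed. A manual probe (curl invocation illustrative):

    curl -sI http://10.104.215.205
    # nginx response headers are expected while the tunnel is alive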

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-831303 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "365.922526ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "97.630273ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/MountCmd/any-port (6.52s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdany-port2823813550/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726855743617659888" to /tmp/TestFunctionalparallelMountCmdany-port2823813550/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726855743617659888" to /tmp/TestFunctionalparallelMountCmdany-port2823813550/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726855743617659888" to /tmp/TestFunctionalparallelMountCmdany-port2823813550/001/test-1726855743617659888
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.966437ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 18:09:03.938921   87188 retry.go:31] will retry after 322.181947ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 18:09 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 18:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 18:09 test-1726855743617659888
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh cat /mount-9p/test-1726855743617659888
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-831303 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [58697265-a04e-48f0-a809-7eac29829bf7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [58697265-a04e-48f0-a809-7eac29829bf7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [58697265-a04e-48f0-a809-7eac29829bf7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002948268s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-831303 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdany-port2823813550/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.52s)
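Note: the first findmnt probe fails only because the 9p mount was not yet established when it ran; retry.go waits 322ms and the second probe finds it. The shape of the check, for manual use (host path illustrative):

    out/minikube-linux-amd64 mount -p functional-831303 /tmp/somedir:/mount-9p &
    out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"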

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "403.233294ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "51.555324ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.55s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

TestFunctional/parallel/DockerEnv/bash (0.88s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-831303 docker-env) && out/minikube-linux-amd64 status -p functional-831303"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-831303 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.88s)
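Note: this is the standard docker-env round trip: eval'ing the emitted exports points the local docker CLI at the daemon inside the minikube container, so the docker images call lists the cluster's images rather than the host's:

    eval $(out/minikube-linux-amd64 -p functional-831303 docker-env)
    docker images   # now answered by the in-cluster Docker daemon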

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-831303 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-831303
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-831303
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-831303 image ls --format short --alsologtostderr:
I0920 18:09:15.611288  143824 out.go:345] Setting OutFile to fd 1 ...
I0920 18:09:15.611547  143824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:15.611558  143824 out.go:358] Setting ErrFile to fd 2...
I0920 18:09:15.611564  143824 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:15.611741  143824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
I0920 18:09:15.612340  143824 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:15.612454  143824 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:15.612838  143824 cli_runner.go:164] Run: docker container inspect functional-831303 --format={{.State.Status}}
I0920 18:09:15.629491  143824 ssh_runner.go:195] Run: systemctl --version
I0920 18:09:15.629531  143824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831303
I0920 18:09:15.646041  143824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/functional-831303/id_rsa Username:docker}
I0920 18:09:15.743558  143824 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-831303 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| localhost/my-image                          | functional-831303 | c3243640e5aba | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kicbase/echo-server               | functional-831303 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-831303 | b8cfa27246ef6 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-831303 image ls --format table --alsologtostderr:
I0920 18:09:19.742430  144307 out.go:345] Setting OutFile to fd 1 ...
I0920 18:09:19.742762  144307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:19.742775  144307 out.go:358] Setting ErrFile to fd 2...
I0920 18:09:19.742780  144307 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:19.743040  144307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
I0920 18:09:19.743915  144307 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:19.744059  144307 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:19.744608  144307 cli_runner.go:164] Run: docker container inspect functional-831303 --format={{.State.Status}}
I0920 18:09:19.764661  144307 ssh_runner.go:195] Run: systemctl --version
I0920 18:09:19.764703  144307 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831303
I0920 18:09:19.786549  144307 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/functional-831303/id_rsa Username:docker}
I0920 18:09:19.914701  144307 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-831303 image ls --format json --alsologtostderr:
[{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c3243640e5aba3f995e96b58c6cc0c41480db9263f5279ce6282b01a76f99cd2","repoDigests":[],"repoTags":["localhost/my-image:functional-831303"],"size":"1240000"},{"id":"b8cfa27246ef6ec4d45fa7983e08e3bbde1d40b4787240aeb16b71266e4afb31","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-831303"],"size":"30"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-831303"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-831303 image ls --format json --alsologtostderr:
I0920 18:09:19.600453  144257 out.go:345] Setting OutFile to fd 1 ...
I0920 18:09:19.600577  144257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:19.600589  144257 out.go:358] Setting ErrFile to fd 2...
I0920 18:09:19.600595  144257 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:19.600857  144257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
I0920 18:09:19.601668  144257 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:19.601868  144257 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:19.602297  144257 cli_runner.go:164] Run: docker container inspect functional-831303 --format={{.State.Status}}
I0920 18:09:19.621672  144257 ssh_runner.go:195] Run: systemctl --version
I0920 18:09:19.621731  144257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831303
I0920 18:09:19.641539  144257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/functional-831303/id_rsa Username:docker}
I0920 18:09:19.736221  144257 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-831303 image ls --format yaml --alsologtostderr:
- id: b8cfa27246ef6ec4d45fa7983e08e3bbde1d40b4787240aeb16b71266e4afb31
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-831303
size: "30"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-831303
size: "4940000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-831303 image ls --format yaml --alsologtostderr:
I0920 18:09:15.812706  143880 out.go:345] Setting OutFile to fd 1 ...
I0920 18:09:15.812815  143880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:15.812824  143880 out.go:358] Setting ErrFile to fd 2...
I0920 18:09:15.812829  143880 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:15.813034  143880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
I0920 18:09:15.813645  143880 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:15.813738  143880 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:15.814102  143880 cli_runner.go:164] Run: docker container inspect functional-831303 --format={{.State.Status}}
I0920 18:09:15.830064  143880 ssh_runner.go:195] Run: systemctl --version
I0920 18:09:15.830101  143880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831303
I0920 18:09:15.846301  143880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/functional-831303/id_rsa Username:docker}
I0920 18:09:15.938785  143880 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh pgrep buildkitd: exit status 1 (227.976854ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image build -t localhost/my-image:functional-831303 testdata/build --alsologtostderr
2024/09/20 18:09:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-831303 image build -t localhost/my-image:functional-831303 testdata/build --alsologtostderr: (3.277563138s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-831303 image build -t localhost/my-image:functional-831303 testdata/build --alsologtostderr:
I0920 18:09:16.234164  144026 out.go:345] Setting OutFile to fd 1 ...
I0920 18:09:16.234288  144026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:16.234298  144026 out.go:358] Setting ErrFile to fd 2...
I0920 18:09:16.234305  144026 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 18:09:16.234582  144026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
I0920 18:09:16.235218  144026 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:16.235794  144026 config.go:182] Loaded profile config "functional-831303": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 18:09:16.236236  144026 cli_runner.go:164] Run: docker container inspect functional-831303 --format={{.State.Status}}
I0920 18:09:16.252939  144026 ssh_runner.go:195] Run: systemctl --version
I0920 18:09:16.252978  144026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-831303
I0920 18:09:16.270140  144026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/functional-831303/id_rsa Username:docker}
I0920 18:09:16.358958  144026 build_images.go:161] Building image from path: /tmp/build.3493430608.tar
I0920 18:09:16.359014  144026 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 18:09:16.366941  144026 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3493430608.tar
I0920 18:09:16.369845  144026 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3493430608.tar: stat -c "%s %y" /var/lib/minikube/build/build.3493430608.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3493430608.tar': No such file or directory
I0920 18:09:16.369874  144026 ssh_runner.go:362] scp /tmp/build.3493430608.tar --> /var/lib/minikube/build/build.3493430608.tar (3072 bytes)
I0920 18:09:16.391233  144026 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3493430608
I0920 18:09:16.398432  144026 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3493430608 -xf /var/lib/minikube/build/build.3493430608.tar
I0920 18:09:16.405950  144026 docker.go:360] Building image: /var/lib/minikube/build/build.3493430608
I0920 18:09:16.406009  144026 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-831303 /var/lib/minikube/build/build.3493430608
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.8s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:c3243640e5aba3f995e96b58c6cc0c41480db9263f5279ce6282b01a76f99cd2 done
#8 naming to localhost/my-image:functional-831303 done
#8 DONE 0.0s
I0920 18:09:19.447480  144026 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-831303 /var/lib/minikube/build/build.3493430608: (3.041433891s)
I0920 18:09:19.447570  144026 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3493430608
I0920 18:09:19.456308  144026 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3493430608.tar
I0920 18:09:19.465980  144026 build_images.go:217] Built localhost/my-image:functional-831303 from /tmp/build.3493430608.tar
I0920 18:09:19.466010  144026 build_images.go:133] succeeded building to: functional-831303
I0920 18:09:19.466016  144026 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.73s)
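Note: from BuildKit steps #1 and #5-#7 above, the 97-byte Dockerfile under testdata/build is approximately the following (reconstructed from the log, not copied from the repo):

    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /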

TestFunctional/parallel/ImageCommands/Setup (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.526349527s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-831303
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image load --daemon kicbase/echo-server:functional-831303 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image load --daemon kicbase/echo-server:functional-831303 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdspecific-port3862904503/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (319.963805ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 18:09:10.454292   87188 retry.go:31] will retry after 260.858262ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdspecific-port3862904503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh "sudo umount -f /mount-9p": exit status 1 (287.873887ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-831303 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdspecific-port3862904503/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)
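
The test starts a 9p mount on a fixed port and probes it from inside the guest; a minimal sketch, assuming a host directory /tmp/src (path and port are placeholders). In the run above, the final umount exited with status 32 ("not mounted") because stopping the mount process had already torn the share down, and the test tolerates that:

    # Serve /tmp/src into the guest at /mount-9p on a fixed port (foreground process)
    minikube mount -p functional-831303 /tmp/src:/mount-9p --port 46464 &
    # From inside the guest, confirm a 9p filesystem is mounted there
    minikube -p functional-831303 ssh "findmnt -T /mount-9p | grep 9p"
    # Force-unmount during cleanup; may report "not mounted" if already gone
    minikube -p functional-831303 ssh "sudo umount -f /mount-9p"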

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-831303
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image load --daemon kicbase/echo-server:functional-831303 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T" /mount1: exit status 1 (387.382593ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 18:09:12.215405   87188 retry.go:31] will retry after 671.956995ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-831303 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-831303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4188887088/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.98s)
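
Cleanup of leftover mount daemons does not require tracking their PIDs; a minimal sketch of the kill step the test ends with:

    # Kill any mount processes still running for this profile
    minikube mount -p functional-831303 --kill=true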

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image save kicbase/echo-server:functional-831303 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.59s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image rm kicbase/echo-server:functional-831303 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-831303
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-831303 image save --daemon kicbase/echo-server:functional-831303 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-831303
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.57s)
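
Taken together, the last four image tests round-trip an image through a tarball and back; a minimal sketch, assuming a writable path for the archive:

    # Export the image from the cluster runtime to a tarball on the host
    minikube -p functional-831303 image save kicbase/echo-server:functional-831303 ./echo-server-save.tar
    # Remove it from the cluster runtime
    minikube -p functional-831303 image rm kicbase/echo-server:functional-831303
    # Re-import it from the tarball
    minikube -p functional-831303 image load ./echo-server-save.tar
    # Alternatively, push it straight back into the host Docker daemon
    minikube -p functional-831303 image save --daemon kicbase/echo-server:functional-831303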

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-831303
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-831303
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-831303
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (96.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 18:10:22.319953   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.326334   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.337720   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.359768   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.401207   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.482568   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.644134   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:22.966158   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:23.607644   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:24.889757   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:27.452007   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:32.573788   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:10:42.815759   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:11:03.297797   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-735738 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m36.31077469s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (96.97s)
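
A minimal sketch of the HA bring-up the test performs (flags mirror the command logged above):

    # Start a multi-control-plane cluster on the Docker driver
    minikube start -p ha-735738 --ha --wait=true --memory=2200 --driver=docker --container-runtime=docker
    # Report host/kubelet/apiserver state for every node
    minikube -p ha-735738 status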

TestMultiControlPlane/serial/DeployApp (4.73s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-735738 -- rollout status deployment/busybox: (2.84092005s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-cgltg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-pxcdp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-r7k66 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-cgltg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-pxcdp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-r7k66 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-cgltg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-pxcdp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-r7k66 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.73s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-cgltg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-cgltg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-pxcdp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-pxcdp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-r7k66 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-735738 -- exec busybox-7dff88458-r7k66 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
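
The host-reachability check resolves host.minikube.internal inside each pod and pings the address it returns; a minimal sketch against one pod (the pod name is from this run, and the awk/cut pipeline assumes BusyBox nslookup output, where the answer sits on line 5, third field):

    # Extract the host IP as seen from inside the pod
    kubectl --context ha-735738 exec busybox-7dff88458-cgltg -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # Ping the gateway address it resolves to (192.168.49.1 on this Docker network)
    kubectl --context ha-735738 exec busybox-7dff88458-cgltg -- sh -c "ping -c 1 192.168.49.1"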

TestMultiControlPlane/serial/AddWorkerNode (23.31s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-735738 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-735738 -v=7 --alsologtostderr: (22.507385142s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
E0920 18:11:44.260123   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.31s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-735738 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

TestMultiControlPlane/serial/CopyFile (15.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp testdata/cp-test.txt ha-735738:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454759335/001/cp-test_ha-735738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738:/home/docker/cp-test.txt ha-735738-m02:/home/docker/cp-test_ha-735738_ha-735738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test_ha-735738_ha-735738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738:/home/docker/cp-test.txt ha-735738-m03:/home/docker/cp-test_ha-735738_ha-735738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test_ha-735738_ha-735738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738:/home/docker/cp-test.txt ha-735738-m04:/home/docker/cp-test_ha-735738_ha-735738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test_ha-735738_ha-735738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp testdata/cp-test.txt ha-735738-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454759335/001/cp-test_ha-735738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m02:/home/docker/cp-test.txt ha-735738:/home/docker/cp-test_ha-735738-m02_ha-735738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test_ha-735738-m02_ha-735738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m02:/home/docker/cp-test.txt ha-735738-m03:/home/docker/cp-test_ha-735738-m02_ha-735738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test_ha-735738-m02_ha-735738-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m02:/home/docker/cp-test.txt ha-735738-m04:/home/docker/cp-test_ha-735738-m02_ha-735738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test_ha-735738-m02_ha-735738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp testdata/cp-test.txt ha-735738-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454759335/001/cp-test_ha-735738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m03:/home/docker/cp-test.txt ha-735738:/home/docker/cp-test_ha-735738-m03_ha-735738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test_ha-735738-m03_ha-735738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m03:/home/docker/cp-test.txt ha-735738-m02:/home/docker/cp-test_ha-735738-m03_ha-735738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test_ha-735738-m03_ha-735738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m03:/home/docker/cp-test.txt ha-735738-m04:/home/docker/cp-test_ha-735738-m03_ha-735738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test_ha-735738-m03_ha-735738-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp testdata/cp-test.txt ha-735738-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3454759335/001/cp-test_ha-735738-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m04:/home/docker/cp-test.txt ha-735738:/home/docker/cp-test_ha-735738-m04_ha-735738.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738 "sudo cat /home/docker/cp-test_ha-735738-m04_ha-735738.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m04:/home/docker/cp-test.txt ha-735738-m02:/home/docker/cp-test_ha-735738-m04_ha-735738-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test_ha-735738-m04_ha-735738-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 cp ha-735738-m04:/home/docker/cp-test.txt ha-735738-m03:/home/docker/cp-test_ha-735738-m04_ha-735738-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 ssh -n ha-735738-m03 "sudo cat /home/docker/cp-test_ha-735738-m04_ha-735738-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.34s)
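
Every hop in the copy matrix above is a cp into a node followed by an ssh read back out; a minimal sketch of a single hop:

    # Copy a file from the host into the m02 node
    minikube -p ha-735738 cp testdata/cp-test.txt ha-735738-m02:/home/docker/cp-test.txt
    # Read it back over ssh to verify the contents survived
    minikube -p ha-735738 ssh -n ha-735738-m02 "sudo cat /home/docker/cp-test.txt"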

TestMultiControlPlane/serial/StopSecondaryNode (11.32s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-735738 node stop m02 -v=7 --alsologtostderr: (10.687712456s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr: exit status 7 (635.267165ms)

-- stdout --
	ha-735738
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-735738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735738-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-735738-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0920 18:12:11.814227  171830 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:12:11.814346  171830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:11.814356  171830 out.go:358] Setting ErrFile to fd 2...
	I0920 18:12:11.814362  171830 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:12:11.814585  171830 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:12:11.814800  171830 out.go:352] Setting JSON to false
	I0920 18:12:11.814843  171830 mustload.go:65] Loading cluster: ha-735738
	I0920 18:12:11.814943  171830 notify.go:220] Checking for updates...
	I0920 18:12:11.815296  171830 config.go:182] Loaded profile config "ha-735738": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:12:11.815320  171830 status.go:174] checking status of ha-735738 ...
	I0920 18:12:11.815727  171830 cli_runner.go:164] Run: docker container inspect ha-735738 --format={{.State.Status}}
	I0920 18:12:11.834961  171830 status.go:364] ha-735738 host status = "Running" (err=<nil>)
	I0920 18:12:11.835010  171830 host.go:66] Checking if "ha-735738" exists ...
	I0920 18:12:11.835261  171830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-735738
	I0920 18:12:11.853953  171830 host.go:66] Checking if "ha-735738" exists ...
	I0920 18:12:11.854202  171830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:11.854265  171830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-735738
	I0920 18:12:11.871024  171830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/ha-735738/id_rsa Username:docker}
	I0920 18:12:11.963242  171830 ssh_runner.go:195] Run: systemctl --version
	I0920 18:12:11.967037  171830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:11.978304  171830 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:12:12.024640  171830 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-20 18:12:12.015469425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:12:12.025201  171830 kubeconfig.go:125] found "ha-735738" server: "https://192.168.49.254:8443"
	I0920 18:12:12.025231  171830 api_server.go:166] Checking apiserver status ...
	I0920 18:12:12.025263  171830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:12:12.036167  171830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2415/cgroup
	I0920 18:12:12.044350  171830 api_server.go:182] apiserver freezer: "3:freezer:/docker/fea44e94fed6c8a48c6e54b1106ba0ac13c7ad95c16aa53aee63bd194618d61e/kubepods/burstable/poddffb7b0031e9649c650236ab5baa5775/5a462cab18215c257917ad1f2c7fa298a0272b98df4673e5ec6a6acef7a7e49c"
	I0920 18:12:12.044405  171830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fea44e94fed6c8a48c6e54b1106ba0ac13c7ad95c16aa53aee63bd194618d61e/kubepods/burstable/poddffb7b0031e9649c650236ab5baa5775/5a462cab18215c257917ad1f2c7fa298a0272b98df4673e5ec6a6acef7a7e49c/freezer.state
	I0920 18:12:12.051770  171830 api_server.go:204] freezer state: "THAWED"
	I0920 18:12:12.051796  171830 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:12:12.055389  171830 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:12:12.055409  171830 status.go:456] ha-735738 apiserver status = Running (err=<nil>)
	I0920 18:12:12.055432  171830 status.go:176] ha-735738 status: &{Name:ha-735738 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:12.055456  171830 status.go:174] checking status of ha-735738-m02 ...
	I0920 18:12:12.055681  171830 cli_runner.go:164] Run: docker container inspect ha-735738-m02 --format={{.State.Status}}
	I0920 18:12:12.072008  171830 status.go:364] ha-735738-m02 host status = "Stopped" (err=<nil>)
	I0920 18:12:12.072027  171830 status.go:377] host is not running, skipping remaining checks
	I0920 18:12:12.072033  171830 status.go:176] ha-735738-m02 status: &{Name:ha-735738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:12.072054  171830 status.go:174] checking status of ha-735738-m03 ...
	I0920 18:12:12.072305  171830 cli_runner.go:164] Run: docker container inspect ha-735738-m03 --format={{.State.Status}}
	I0920 18:12:12.088792  171830 status.go:364] ha-735738-m03 host status = "Running" (err=<nil>)
	I0920 18:12:12.088814  171830 host.go:66] Checking if "ha-735738-m03" exists ...
	I0920 18:12:12.089056  171830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-735738-m03
	I0920 18:12:12.106494  171830 host.go:66] Checking if "ha-735738-m03" exists ...
	I0920 18:12:12.106844  171830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:12.106889  171830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-735738-m03
	I0920 18:12:12.123398  171830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/ha-735738-m03/id_rsa Username:docker}
	I0920 18:12:12.215003  171830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:12.225388  171830 kubeconfig.go:125] found "ha-735738" server: "https://192.168.49.254:8443"
	I0920 18:12:12.225419  171830 api_server.go:166] Checking apiserver status ...
	I0920 18:12:12.225456  171830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:12:12.235027  171830 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2268/cgroup
	I0920 18:12:12.242938  171830 api_server.go:182] apiserver freezer: "3:freezer:/docker/5224f793f114ee5e11fbbbf214bc6dcf67d786a85ce9cd6ed245997d78449034/kubepods/burstable/pod7317a912e4bd4826b1998fead5c6eac0/ab8dbebe8ec0293c3eb453ecf11508bc3384235b93b72a2f2c9c6a9a1cbea7ad"
	I0920 18:12:12.243002  171830 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5224f793f114ee5e11fbbbf214bc6dcf67d786a85ce9cd6ed245997d78449034/kubepods/burstable/pod7317a912e4bd4826b1998fead5c6eac0/ab8dbebe8ec0293c3eb453ecf11508bc3384235b93b72a2f2c9c6a9a1cbea7ad/freezer.state
	I0920 18:12:12.250457  171830 api_server.go:204] freezer state: "THAWED"
	I0920 18:12:12.250484  171830 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 18:12:12.254161  171830 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 18:12:12.254184  171830 status.go:456] ha-735738-m03 apiserver status = Running (err=<nil>)
	I0920 18:12:12.254195  171830 status.go:176] ha-735738-m03 status: &{Name:ha-735738-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:12:12.254210  171830 status.go:174] checking status of ha-735738-m04 ...
	I0920 18:12:12.254456  171830 cli_runner.go:164] Run: docker container inspect ha-735738-m04 --format={{.State.Status}}
	I0920 18:12:12.270928  171830 status.go:364] ha-735738-m04 host status = "Running" (err=<nil>)
	I0920 18:12:12.270948  171830 host.go:66] Checking if "ha-735738-m04" exists ...
	I0920 18:12:12.271216  171830 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-735738-m04
	I0920 18:12:12.287495  171830 host.go:66] Checking if "ha-735738-m04" exists ...
	I0920 18:12:12.287735  171830 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:12:12.287768  171830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-735738-m04
	I0920 18:12:12.304523  171830 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/ha-735738-m04/id_rsa Username:docker}
	I0920 18:12:12.395040  171830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:12:12.404791  171830 status.go:176] ha-735738-m04 status: &{Name:ha-735738-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.32s)
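
A minimal sketch of the stop-and-inspect step; as the output above shows, status exits non-zero (7 in this run) once any node reports Stopped, so callers should treat that exit code as informational rather than fatal:

    # Stop the second control-plane node
    minikube -p ha-735738 node stop m02
    # Lists m02 as Stopped; exits 7 while any node is down
    minikube -p ha-735738 status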

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)

TestMultiControlPlane/serial/RestartSecondaryNode (34.53s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-735738 node start m02 -v=7 --alsologtostderr: (33.659020307s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (34.53s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.07s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-735738 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-735738 -v=7 --alsologtostderr
E0920 18:13:06.181923   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-735738 -v=7 --alsologtostderr: (33.592101741s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735738 --wait=true -v=7 --alsologtostderr
E0920 18:13:51.403135   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.409577   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.420890   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.442235   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.483623   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.565040   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:51.726572   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:52.048171   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:52.690409   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:53.972015   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:13:56.533776   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:01.655732   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:11.897415   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:14:32.379762   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:13.342015   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:22.320609   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:15:50.023433   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:16:35.263414   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-735738 --wait=true -v=7 --alsologtostderr: (3m16.373794332s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-735738
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (230.07s)
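
The restart cycle is a full stop followed by a waiting start, with the node list compared before and after; a minimal sketch:

    minikube node list -p ha-735738          # record the node set
    minikube stop -p ha-735738               # stop every node in the profile
    minikube start -p ha-735738 --wait=true  # restart and wait for readiness
    minikube node list -p ha-735738          # the node set should be unchanged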

TestMultiControlPlane/serial/DeleteSecondaryNode (9.16s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-735738 node delete m03 -v=7 --alsologtostderr: (8.423175708s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.16s)
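
A minimal sketch of the delete step and the readiness check that follows it:

    # Remove the third control-plane node from the cluster
    minikube -p ha-735738 node delete m03
    # The remaining nodes should all report Ready
    kubectl --context ha-735738 get nodes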

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.63s)

TestMultiControlPlane/serial/StopCluster (32.2s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-735738 stop -v=7 --alsologtostderr: (32.104008475s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr: exit status 7 (94.625893ms)

-- stdout --
	ha-735738
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735738-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-735738-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 18:17:20.437998  203292 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:17:20.438116  203292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:17:20.438126  203292 out.go:358] Setting ErrFile to fd 2...
	I0920 18:17:20.438131  203292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:17:20.438354  203292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:17:20.438563  203292 out.go:352] Setting JSON to false
	I0920 18:17:20.438599  203292 mustload.go:65] Loading cluster: ha-735738
	I0920 18:17:20.438643  203292 notify.go:220] Checking for updates...
	I0920 18:17:20.438994  203292 config.go:182] Loaded profile config "ha-735738": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:17:20.439016  203292 status.go:174] checking status of ha-735738 ...
	I0920 18:17:20.439425  203292 cli_runner.go:164] Run: docker container inspect ha-735738 --format={{.State.Status}}
	I0920 18:17:20.455787  203292 status.go:364] ha-735738 host status = "Stopped" (err=<nil>)
	I0920 18:17:20.455806  203292 status.go:377] host is not running, skipping remaining checks
	I0920 18:17:20.455812  203292 status.go:176] ha-735738 status: &{Name:ha-735738 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:17:20.455838  203292 status.go:174] checking status of ha-735738-m02 ...
	I0920 18:17:20.456057  203292 cli_runner.go:164] Run: docker container inspect ha-735738-m02 --format={{.State.Status}}
	I0920 18:17:20.471316  203292 status.go:364] ha-735738-m02 host status = "Stopped" (err=<nil>)
	I0920 18:17:20.471332  203292 status.go:377] host is not running, skipping remaining checks
	I0920 18:17:20.471339  203292 status.go:176] ha-735738-m02 status: &{Name:ha-735738-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:17:20.471359  203292 status.go:174] checking status of ha-735738-m04 ...
	I0920 18:17:20.471576  203292 cli_runner.go:164] Run: docker container inspect ha-735738-m04 --format={{.State.Status}}
	I0920 18:17:20.487045  203292 status.go:364] ha-735738-m04 host status = "Stopped" (err=<nil>)
	I0920 18:17:20.487066  203292 status.go:377] host is not running, skipping remaining checks
	I0920 18:17:20.487074  203292 status.go:176] ha-735738-m04 status: &{Name:ha-735738-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.20s)

TestMultiControlPlane/serial/RestartCluster (65.85s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-735738 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-735738 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m5.009269776s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (65.85s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

TestMultiControlPlane/serial/AddSecondaryNode (38.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-735738 --control-plane -v=7 --alsologtostderr
E0920 18:18:51.403446   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-735738 --control-plane -v=7 --alsologtostderr: (37.541512608s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-735738 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.34s)
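
Adding a control-plane node rather than a worker differs only by one flag; a minimal sketch:

    # Join a new node as an additional control plane
    minikube node add -p ha-735738 --control-plane
    # Without the flag, the new node joins as a worker (as in AddWorkerNode above)
    minikube node add -p ha-735738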

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (20.35s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-450813 --driver=docker  --container-runtime=docker
E0920 18:19:19.106674   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-450813 --driver=docker  --container-runtime=docker: (20.352419055s)
--- PASS: TestImageBuild/serial/Setup (20.35s)

TestImageBuild/serial/NormalBuild (1.6s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-450813
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-450813: (1.595192505s)
--- PASS: TestImageBuild/serial/NormalBuild (1.60s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-450813
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)
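Each --build-opt here is presumably handed through to the Docker build inside the node, so build-arg=ENV_A=test_env_str reaches the Dockerfile as a build argument and no-cache disables layer reuse. A sketch of driving the same invocation from Go (hypothetical wrapper, mirroring the call at image_test.go:99):

// buildarg.go - sketch of the CLI invocation logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "image", "build",
		"-t", "aaa:latest",
		"--build-opt=build-arg=ENV_A=test_env_str", // assumption: forwarded as docker build --build-arg
		"--build-opt=no-cache",                     // assumption: forwarded as docker build --no-cache
		"./testdata/image-build/test-arg",
		"-p", "image-450813").CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		panic(err)
	}
}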

TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-450813
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.72s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-450813
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.73s)

TestJSONOutput/start/Command (65.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-389396 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
E0920 18:20:22.319933   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-389396 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m5.741024094s)
--- PASS: TestJSONOutput/start/Command (65.74s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-389396 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-389396 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.71s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-389396 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-389396 --output=json --user=testUser: (5.711713238s)
--- PASS: TestJSONOutput/stop/Command (5.71s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-148817 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-148817 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (60.888803ms)
-- stdout --
	{"specversion":"1.0","id":"b4f07922-6ced-4d5f-9539-374339c7ca6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-148817] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dffa5e71-1a09-4881-909b-3e53c4a0fdcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"9252a596-0f02-4d66-af0e-207d15cf5887","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e14f05fc-1645-43db-aff7-8f49214c12ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig"}}
	{"specversion":"1.0","id":"b4018471-ea96-4a79-aafa-8c3861be867a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube"}}
	{"specversion":"1.0","id":"d192c1bf-1158-476b-b642-f4f581cc8de8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5cd34816-89fb-4a80-8395-48fb6b7a5f1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5eda59a4-cad9-47ea-b09f-564452c9d5c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-148817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-148817
--- PASS: TestErrorJSONOutput (0.19s)
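Every stdout line above is a CloudEvents-style JSON object, so the stream is straightforward to consume programmatically. A minimal sketch (hypothetical file, assuming only the fields visible in this log; the io.k8s.sigs.minikube.error event carries name, exitcode and message inside data):

// events.go - decode the JSON event stream from `minikube ... --output=json`.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields shown in the log above.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe minikube's JSON output in
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		// An error event such as DRV_UNSUPPORTED_OS above carries its exit
		// code and message in the data payload.
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
		}
	}
}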

TestKicCustomNetwork/create_custom_network (25.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-128488 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-128488 --network=: (23.291193358s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-128488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-128488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-128488: (2.034071409s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.34s)

TestKicCustomNetwork/use_default_bridge_network (22.55s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-423127 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-423127 --network=bridge: (20.687646476s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-423127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-423127
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-423127: (1.846283979s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.55s)

TestKicExistingNetwork (22.55s)

=== RUN   TestKicExistingNetwork
I0920 18:21:42.100833   87188 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 18:21:42.116537   87188 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 18:21:42.116611   87188 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 18:21:42.116634   87188 cli_runner.go:164] Run: docker network inspect existing-network
W0920 18:21:42.131988   87188 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 18:21:42.132025   87188 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0920 18:21:42.132040   87188 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0920 18:21:42.132143   87188 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 18:21:42.147582   87188 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7732a47c24a4 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:1f:5e:c6:ec} reservation:<nil>}
I0920 18:21:42.148078   87188 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a931c0}
I0920 18:21:42.148112   87188 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 18:21:42.148149   87188 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 18:21:42.205224   87188 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-252433 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-252433 --network=existing-network: (20.523516116s)
helpers_test.go:175: Cleaning up "existing-network-252433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-252433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-252433: (1.893847652s)
I0920 18:22:04.637801   87188 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.55s)
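The trace above shows the network bring-up sequence: the inspect fails because the network does not yet exist, 192.168.49.0/24 is skipped as taken, and a labeled bridge network is created on the next free /24. A minimal sketch of that final `docker network create` step (assuming only the docker CLI on PATH):

// netcreate.go - sketch of the network creation logged at 18:21:42 above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out)) // prints the new network ID on success
	if err != nil {
		panic(err)
	}
}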

TestKicCustomSubnet (22.62s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-278866 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-278866 --subnet=192.168.60.0/24: (20.5609425s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-278866 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-278866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-278866
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-278866: (2.042556567s)
--- PASS: TestKicCustomSubnet (22.62s)

TestKicStaticIP (22.66s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-289209 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-289209 --static-ip=192.168.200.200: (20.540278579s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-289209 ip
helpers_test.go:175: Cleaning up "static-ip-289209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-289209
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-289209: (1.998398246s)
--- PASS: TestKicStaticIP (22.66s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (51.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-661346 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-661346 --driver=docker  --container-runtime=docker: (24.134596402s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-672934 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-672934 --driver=docker  --container-runtime=docker: (22.086317412s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-661346
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-672934
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-672934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-672934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-672934: (2.011051297s)
helpers_test.go:175: Cleaning up "first-661346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-661346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-661346: (1.954902027s)
--- PASS: TestMinikubeProfile (51.28s)
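`profile list -ojson` makes the profile table machine-readable. A sketch of consuming it; the payload is decoded into a generic map here because the exact schema is not printed in this log:

// profiles.go - sketch of reading `minikube profile list -ojson`.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-ojson").Output()
	if err != nil {
		panic(err)
	}
	// Assumption: top-level keys group lists of profiles.
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		panic(err)
	}
	for key, val := range profiles {
		if list, ok := val.([]any); ok {
			fmt.Printf("%s: %d profile(s)\n", key, len(list))
		}
	}
}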

TestMountStart/serial/StartWithMountFirst (6.62s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-971912 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-971912 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (5.619249098s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.62s)

TestMountStart/serial/VerifyMountFirst (0.23s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-971912 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (9.56s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0920 18:23:51.404451   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986090 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.558292749s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.56s)

TestMountStart/serial/VerifyMountSecond (0.23s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.44s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-971912 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-971912 --alsologtostderr -v=5: (1.436761046s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-986090
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-986090: (1.16160505s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (7.87s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-986090
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-986090: (6.8682354s)
--- PASS: TestMountStart/serial/RestartStopped (7.87s)

TestMountStart/serial/VerifyMountPostStop (0.23s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-986090 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (67.92s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269429 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269429 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m7.438072446s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.92s)

TestMultiNode/serial/DeployApp2Nodes (36.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-269429 -- rollout status deployment/busybox: (2.396841511s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:21.039939   87188 retry.go:31] will retry after 1.256669194s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
E0920 18:25:22.319714   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:22.402722   87188 retry.go:31] will retry after 1.708746979s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:24.218840   87188 retry.go:31] will retry after 2.883116709s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:27.209459   87188 retry.go:31] will retry after 4.014320766s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:31.334004   87188 retry.go:31] will retry after 7.220621355s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:38.659207   87188 retry.go:31] will retry after 8.188609056s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 18:25:46.962305   87188 retry.go:31] will retry after 6.718966592s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-cgktw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-h55mq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-cgktw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-h55mq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-cgktw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-h55mq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (36.48s)
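The retry.go lines above show the test polling pod IPs with a growing wait between attempts until the busybox deployment reports one IP per node. A minimal sketch of that poll-and-backoff pattern (hypothetical helper, not minikube's retry.go):

// retrysketch.go - poll pod IPs with growing backoff, as logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podIPs mirrors the repeated `kubectl get pods -o jsonpath=...` call.
func podIPs() []string {
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	backoff := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		if ips := podIPs(); len(ips) >= 2 {
			fmt.Println("pod IPs:", ips)
			return
		}
		fmt.Printf("attempt %d: expected 2 Pod IPs (may be temporary), will retry after %v\n", attempt, backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait, roughly as the logged intervals do
	}
	fmt.Println("gave up waiting for 2 pod IPs")
}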

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-cgktw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-cgktw -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-h55mq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-269429 -- exec busybox-7dff88458-h55mq -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

TestMultiNode/serial/AddNode (15.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-269429 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-269429 -v 3 --alsologtostderr: (14.499792244s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.17s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-269429 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (8.62s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp testdata/cp-test.txt multinode-269429:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile820551971/001/cp-test_multinode-269429.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429:/home/docker/cp-test.txt multinode-269429-m02:/home/docker/cp-test_multinode-269429_multinode-269429-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test_multinode-269429_multinode-269429-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429:/home/docker/cp-test.txt multinode-269429-m03:/home/docker/cp-test_multinode-269429_multinode-269429-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test_multinode-269429_multinode-269429-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp testdata/cp-test.txt multinode-269429-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile820551971/001/cp-test_multinode-269429-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m02:/home/docker/cp-test.txt multinode-269429:/home/docker/cp-test_multinode-269429-m02_multinode-269429.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test_multinode-269429-m02_multinode-269429.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m02:/home/docker/cp-test.txt multinode-269429-m03:/home/docker/cp-test_multinode-269429-m02_multinode-269429-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test_multinode-269429-m02_multinode-269429-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp testdata/cp-test.txt multinode-269429-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile820551971/001/cp-test_multinode-269429-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m03:/home/docker/cp-test.txt multinode-269429:/home/docker/cp-test_multinode-269429-m03_multinode-269429.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429 "sudo cat /home/docker/cp-test_multinode-269429-m03_multinode-269429.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 cp multinode-269429-m03:/home/docker/cp-test.txt multinode-269429-m02:/home/docker/cp-test_multinode-269429-m03_multinode-269429-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 ssh -n multinode-269429-m02 "sudo cat /home/docker/cp-test_multinode-269429-m03_multinode-269429-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.62s)

TestMultiNode/serial/StopNode (2.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-269429 node stop m03: (1.162145759s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269429 status: exit status 7 (434.546271ms)
-- stdout --
	multinode-269429
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-269429-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-269429-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr: exit status 7 (438.138749ms)
-- stdout --
	multinode-269429
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-269429-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-269429-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 18:26:21.654087  290356 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:26:21.654197  290356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:21.654205  290356 out.go:358] Setting ErrFile to fd 2...
	I0920 18:26:21.654209  290356 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:26:21.654381  290356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:26:21.654601  290356 out.go:352] Setting JSON to false
	I0920 18:26:21.654636  290356 mustload.go:65] Loading cluster: multinode-269429
	I0920 18:26:21.654741  290356 notify.go:220] Checking for updates...
	I0920 18:26:21.655019  290356 config.go:182] Loaded profile config "multinode-269429": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:26:21.655037  290356 status.go:174] checking status of multinode-269429 ...
	I0920 18:26:21.655431  290356 cli_runner.go:164] Run: docker container inspect multinode-269429 --format={{.State.Status}}
	I0920 18:26:21.672394  290356 status.go:364] multinode-269429 host status = "Running" (err=<nil>)
	I0920 18:26:21.672418  290356 host.go:66] Checking if "multinode-269429" exists ...
	I0920 18:26:21.672630  290356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-269429
	I0920 18:26:21.688602  290356 host.go:66] Checking if "multinode-269429" exists ...
	I0920 18:26:21.688903  290356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:26:21.688941  290356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-269429
	I0920 18:26:21.705368  290356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/multinode-269429/id_rsa Username:docker}
	I0920 18:26:21.794999  290356 ssh_runner.go:195] Run: systemctl --version
	I0920 18:26:21.798636  290356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:26:21.808314  290356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 18:26:21.853718  290356 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-20 18:26:21.84438972 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-3 Labels:[] ExperimentalBuild:false ServerVersion:27.3.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0920 18:26:21.854332  290356 kubeconfig.go:125] found "multinode-269429" server: "https://192.168.67.2:8443"
	I0920 18:26:21.854369  290356 api_server.go:166] Checking apiserver status ...
	I0920 18:26:21.854422  290356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 18:26:21.864912  290356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2328/cgroup
	I0920 18:26:21.872678  290356 api_server.go:182] apiserver freezer: "3:freezer:/docker/0b78ca274ac76c62f0f503ce70e9f219b5747af9014319c4762912b3027f1ecd/kubepods/burstable/pod5d18c385bd9322d5a2ae5d5a08d914e0/efb35cd31744cc0fb94abef24c8134b6bf20edc9cc28bf964cbb317dd3c3859a"
	I0920 18:26:21.872729  290356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0b78ca274ac76c62f0f503ce70e9f219b5747af9014319c4762912b3027f1ecd/kubepods/burstable/pod5d18c385bd9322d5a2ae5d5a08d914e0/efb35cd31744cc0fb94abef24c8134b6bf20edc9cc28bf964cbb317dd3c3859a/freezer.state
	I0920 18:26:21.879942  290356 api_server.go:204] freezer state: "THAWED"
	I0920 18:26:21.879970  290356 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 18:26:21.883540  290356 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 18:26:21.883563  290356 status.go:456] multinode-269429 apiserver status = Running (err=<nil>)
	I0920 18:26:21.883577  290356 status.go:176] multinode-269429 status: &{Name:multinode-269429 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:26:21.883615  290356 status.go:174] checking status of multinode-269429-m02 ...
	I0920 18:26:21.883857  290356 cli_runner.go:164] Run: docker container inspect multinode-269429-m02 --format={{.State.Status}}
	I0920 18:26:21.899904  290356 status.go:364] multinode-269429-m02 host status = "Running" (err=<nil>)
	I0920 18:26:21.899924  290356 host.go:66] Checking if "multinode-269429-m02" exists ...
	I0920 18:26:21.900151  290356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-269429-m02
	I0920 18:26:21.915462  290356 host.go:66] Checking if "multinode-269429-m02" exists ...
	I0920 18:26:21.915702  290356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 18:26:21.915735  290356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-269429-m02
	I0920 18:26:21.931558  290356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19678-80428/.minikube/machines/multinode-269429-m02/id_rsa Username:docker}
	I0920 18:26:22.022912  290356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 18:26:22.032543  290356 status.go:176] multinode-269429-m02 status: &{Name:multinode-269429-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:26:22.032585  290356 status.go:174] checking status of multinode-269429-m03 ...
	I0920 18:26:22.032897  290356 cli_runner.go:164] Run: docker container inspect multinode-269429-m03 --format={{.State.Status}}
	I0920 18:26:22.049352  290356 status.go:364] multinode-269429-m03 host status = "Stopped" (err=<nil>)
	I0920 18:26:22.049382  290356 status.go:377] host is not running, skipping remaining checks
	I0920 18:26:22.049389  290356 status.go:176] multinode-269429-m03 status: &{Name:multinode-269429-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
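The stderr trace above walks the full status pipeline: docker container inspect for host state, an SSH probe of kubelet, a pgrep plus freezer-cgroup check for the apiserver, and finally an HTTPS GET of /healthz expecting a 200 "ok". A sketch of that last probe (assumptions: the apiserver address from the log; certificate verification skipped, as the test environment trusts the local apiserver):

// healthz.go - sketch of the final apiserver health probe logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Skip verification in this sketch; the apiserver uses a cluster cert.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}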

TestMultiNode/serial/StartAfterStop (9.55s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-269429 node start m03 -v=7 --alsologtostderr: (8.919392916s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.55s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (109.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269429
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-269429
E0920 18:26:45.385072   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-269429: (22.18819179s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269429 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269429 --wait=true -v=8 --alsologtostderr: (1m26.997724277s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269429
--- PASS: TestMultiNode/serial/RestartKeepsNodes (109.28s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-269429 node delete m03: (4.550371855s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.08s)
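
The readiness assertion at multinode_test.go:444 drives kubectl with a go-template that prints the status of each node's Ready condition, one line per node. A minimal sketch of running the same check from Go follows; it assumes kubectl is on PATH and the current kubeconfig context points at the cluster under test:

    // Runs the same node-readiness go-template as multinode_test.go:444.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
        out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).CombinedOutput()
        if err != nil {
            fmt.Printf("kubectl failed: %v\n%s", err, out)
            return
        }
        fmt.Print(string(out)) // expect one "True" line per Ready node
    }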

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-269429 stop: (21.185691451s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269429 status: exit status 7 (106.461287ms)

                                                
                                                
-- stdout --
	multinode-269429
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-269429-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr: exit status 7 (78.834537ms)

                                                
                                                
-- stdout --
	multinode-269429
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-269429-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 18:28:47.300756  305823 out.go:345] Setting OutFile to fd 1 ...
	I0920 18:28:47.301021  305823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:28:47.301032  305823 out.go:358] Setting ErrFile to fd 2...
	I0920 18:28:47.301036  305823 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 18:28:47.301204  305823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-80428/.minikube/bin
	I0920 18:28:47.301377  305823 out.go:352] Setting JSON to false
	I0920 18:28:47.301407  305823 mustload.go:65] Loading cluster: multinode-269429
	I0920 18:28:47.301532  305823 notify.go:220] Checking for updates...
	I0920 18:28:47.301852  305823 config.go:182] Loaded profile config "multinode-269429": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 18:28:47.301875  305823 status.go:174] checking status of multinode-269429 ...
	I0920 18:28:47.302404  305823 cli_runner.go:164] Run: docker container inspect multinode-269429 --format={{.State.Status}}
	I0920 18:28:47.319488  305823 status.go:364] multinode-269429 host status = "Stopped" (err=<nil>)
	I0920 18:28:47.319527  305823 status.go:377] host is not running, skipping remaining checks
	I0920 18:28:47.319538  305823 status.go:176] multinode-269429 status: &{Name:multinode-269429 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 18:28:47.319571  305823 status.go:174] checking status of multinode-269429-m02 ...
	I0920 18:28:47.319883  305823 cli_runner.go:164] Run: docker container inspect multinode-269429-m02 --format={{.State.Status}}
	I0920 18:28:47.336819  305823 status.go:364] multinode-269429-m02 host status = "Stopped" (err=<nil>)
	I0920 18:28:47.336839  305823 status.go:377] host is not running, skipping remaining checks
	I0920 18:28:47.336845  305823 status.go:176] multinode-269429-m02 status: &{Name:multinode-269429-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.37s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269429 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0920 18:28:51.402529   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269429 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (51.176902381s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-269429 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.72s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (23.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-269429
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269429-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-269429-m02 --driver=docker  --container-runtime=docker: exit status 14 (58.987505ms)

                                                
                                                
-- stdout --
	* [multinode-269429-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-269429-m02' is duplicated with machine name 'multinode-269429-m02' in profile 'multinode-269429'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-269429-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-269429-m03 --driver=docker  --container-runtime=docker: (20.775946375s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-269429
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-269429: exit status 80 (258.534629ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-269429 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-269429-m03 already exists in multinode-269429-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-269429-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-269429-m03: (2.007932378s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.14s)

                                                
                                    
TestPreload (122.21s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-643292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0920 18:30:14.468519   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:30:22.320684   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-643292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m31.799292462s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-643292 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-643292 image pull gcr.io/k8s-minikube/busybox: (1.355297631s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-643292
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-643292: (10.68701006s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-643292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-643292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (16.11417058s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-643292 image list
helpers_test.go:175: Cleaning up "test-preload-643292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-643292
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-643292: (2.045667178s)
--- PASS: TestPreload (122.21s)

                                                
                                    
TestScheduledStopUnix (93.6s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-742202 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-742202 --memory=2048 --driver=docker  --container-runtime=docker: (20.795519202s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-742202 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-742202 -n scheduled-stop-742202
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-742202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 18:32:29.294525   87188 retry.go:31] will retry after 123.12µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.295678   87188 retry.go:31] will retry after 144.806µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.296842   87188 retry.go:31] will retry after 197.722µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.297968   87188 retry.go:31] will retry after 216.558µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.299108   87188 retry.go:31] will retry after 294.167µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.300239   87188 retry.go:31] will retry after 514.399µs: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.301388   87188 retry.go:31] will retry after 1.233106ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.303625   87188 retry.go:31] will retry after 1.830706ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.305863   87188 retry.go:31] will retry after 3.254272ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.310075   87188 retry.go:31] will retry after 2.442448ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.313292   87188 retry.go:31] will retry after 5.021386ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.318428   87188 retry.go:31] will retry after 6.276738ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.325675   87188 retry.go:31] will retry after 10.752692ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.336875   87188 retry.go:31] will retry after 22.268899ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
I0920 18:32:29.360128   87188 retry.go:31] will retry after 25.122008ms: open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/scheduled-stop-742202/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-742202 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-742202 -n scheduled-stop-742202
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-742202
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-742202 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-742202
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-742202: exit status 7 (62.220179ms)

                                                
                                                
-- stdout --
	scheduled-stop-742202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-742202 -n scheduled-stop-742202
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-742202 -n scheduled-stop-742202: exit status 7 (60.731923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-742202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-742202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-742202: (1.562373441s)
--- PASS: TestScheduledStopUnix (93.60s)
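
The retry.go:31 lines above show the wait loop for the scheduled-stop pid file: each failed stat is retried after a roughly doubling, jittered delay starting in the microsecond range. A minimal Go sketch of that pattern follows; it is written from the log output rather than minikube's retry helper, and the pid path, attempt cap, and jitter formula are assumptions:

    // Retry an operation with a doubling, jittered backoff, as in the log.
    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // 0..100% jitter (assumed)
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
            delay *= 2
        }
        return err
    }

    func main() {
        pidPath := "/tmp/scheduled-stop/pid" // hypothetical path for the sketch
        _ = retryWithBackoff(10, 100*time.Microsecond, func() error {
            _, err := os.Stat(pidPath)
            return err
        })
    }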

                                                
                                    
TestSkaffold (98.85s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe2231921754 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-716249 --memory=2600 --driver=docker  --container-runtime=docker
E0920 18:33:51.406714   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-716249 --memory=2600 --driver=docker  --container-runtime=docker: (23.941364767s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe2231921754 run --minikube-profile skaffold-716249 --kube-context skaffold-716249 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe2231921754 run --minikube-profile skaffold-716249 --kube-context skaffold-716249 --status-check=true --port-forward=false --interactive=false: (1m0.510576123s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-f5679b677-6c9k4" [1f76c638-9265-4730-8171-a081f39e8e4d] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003382263s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7db956b648-r47wf" [06bdfff3-acbd-439e-9a49-e8f1b8b1e036] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003019561s
helpers_test.go:175: Cleaning up "skaffold-716249" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-716249
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-716249: (2.71243833s)
--- PASS: TestSkaffold (98.85s)

                                                
                                    
TestInsufficientStorage (9.42s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-051217 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0920 18:35:22.319997   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-051217 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.340247506s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8e6a6bda-2a1b-4618-bc49-dce6757fd981","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-051217] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca83d578-a176-4b74-b037-6cb525681969","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"7abbd67c-074c-484c-a9a7-7d1ef0beec31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"eec81b3d-4dd6-4c70-8fbb-f9560bde338c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig"}}
	{"specversion":"1.0","id":"0ac61ff1-4491-4a4f-afc4-6e8af36d8ff6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube"}}
	{"specversion":"1.0","id":"7285a642-d846-42c0-96e7-8732fa68fffc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8d377525-a4bf-4af1-97c5-f2ecc4900e38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6fad2344-7fdc-4f45-aa3f-62d8d9d6752c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"7ae937ac-fad6-48e1-b458-a3a25dd59f72","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3dc48254-194a-497a-a807-6418305ace2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"971e7bcd-b13a-4540-b4d1-71d1ae295bde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e956ac79-c8f7-4f26-9024-b29b07062ef1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-051217\" primary control-plane node in \"insufficient-storage-051217\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"31ceaa36-9c65-4af9-a0cd-0e835481233f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"77992ce2-5d71-4fb7-a2a5-2092a09b3d03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"323140d1-a32e-4d3d-bf02-d9e567849266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-051217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-051217 --output=json --layout=cluster: exit status 7 (248.982265ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-051217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-051217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:35:28.136363  345880 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-051217" does not appear in /home/jenkins/minikube-integration/19678-80428/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-051217 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-051217 --output=json --layout=cluster: exit status 7 (244.666605ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-051217","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-051217","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 18:35:28.381520  345974 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-051217" does not appear in /home/jenkins/minikube-integration/19678-80428/kubeconfig
	E0920 18:35:28.390694  345974 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/insufficient-storage-051217/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-051217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-051217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-051217: (1.589237905s)
--- PASS: TestInsufficientStorage (9.42s)
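
With --output=json, each stdout line above is a CloudEvents-style envelope (specversion, id, source, type, data), and failures arrive as type io.k8s.sigs.minikube.error carrying advice, exitcode, issues, and message fields. A minimal Go sketch for consuming that stream follows; the struct is reconstructed from the log lines, not taken from minikube's source:

    // Decode line-delimited minikube --output=json events from stdin.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // error events can be long
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // skip anything that is not an event line
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("exit %s (%s): %s\n", ev.Data["exitcode"], ev.Data["name"], ev.Data["message"])
            }
        }
    }

Fed the stdout block above, this sketch would surface the single RSRC_DOCKER_STORAGE error event with exit code 26, which matches the exit status the test asserts on.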

                                                
                                    
TestRunningBinaryUpgrade (79.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1382988493 start -p running-upgrade-135921 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1382988493 start -p running-upgrade-135921 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.132584814s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-135921 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-135921 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.270141839s)
helpers_test.go:175: Cleaning up "running-upgrade-135921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-135921
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-135921: (2.704732807s)
--- PASS: TestRunningBinaryUpgrade (79.93s)

                                                
                                    
TestKubernetesUpgrade (323.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.808227268s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-195702
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-195702: (1.213161096s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-195702 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-195702 status --format={{.Host}}: exit status 7 (76.233713ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m26.232861934s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-195702 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (62.471049ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-195702] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-195702
	    minikube start -p kubernetes-upgrade-195702 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1957022 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-195702 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 18:42:50.688695   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-195702 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (18.936665066s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-195702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-195702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-195702: (2.281361066s)
--- PASS: TestKubernetesUpgrade (323.67s)

                                                
                                    
TestMissingContainerUpgrade (105.03s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2870105597 start -p missing-upgrade-072261 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2870105597 start -p missing-upgrade-072261 --memory=2200 --driver=docker  --container-runtime=docker: (34.20668507s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-072261
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-072261: (10.449078526s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-072261
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-072261 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0920 18:38:51.403016   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-072261 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (57.638504372s)
helpers_test.go:175: Cleaning up "missing-upgrade-072261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-072261
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-072261: (2.16708824s)
--- PASS: TestMissingContainerUpgrade (105.03s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestPause/serial/Start (76.86s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-438945 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
I0920 18:35:29.984626   87188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:35:29.984736   87188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0920 18:35:30.024775   87188 install.go:62] docker-machine-driver-kvm2: exit status 1
W0920 18:35:30.025116   87188 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:35:30.025174   87188 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate806621081/001/docker-machine-driver-kvm2
	I0920 18:35:30.419612   87188 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate806621081/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000599880 gz:0xc000599888 tar:0xc000599820 tar.bz2:0xc000599830 tar.gz:0xc000599840 tar.xz:0xc000599850 tar.zst:0xc000599860 tbz2:0xc000599830 tgz:0xc000599840 txz:0xc000599850 tzst:0xc000599860 xz:0xc000599890 zip:0xc0005998a0 zst:0xc000599898] Getters:map[file:0xc001c74cc0 http:0xc00014f6d0 https:0xc00014f720] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:35:30.419670   87188 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate806621081/001/docker-machine-driver-kvm2
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-438945 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m16.855104383s)
--- PASS: TestPause/serial/Start (76.86s)
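
Interleaved with this test, the download.go/driver.go lines record the KVM driver fetch fallback: the arch-suffixed artifact (docker-machine-driver-kvm2-amd64) is tried first, and when its checksum file 404s, the downloader retries the un-suffixed common name. A minimal sketch of that fallback follows, using plain net/http instead of the go-getter and checksum machinery the real downloader goes through:

    // Arch-specific download with fallback to the common artifact name.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func fetch(url, dst string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        f, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = io.Copy(f, resp.Body)
        return err
    }

    func main() {
        base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
        dst := "/tmp/docker-machine-driver-kvm2"
        if err := fetch(base+"-amd64", dst); err != nil {
            fmt.Println("failed to download arch specific driver:", err)
            fmt.Println("trying to get the common version:", fetch(base, dst))
        }
    }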

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1625050869 start -p stopped-upgrade-409858 --memory=2200 --vm-driver=docker  --container-runtime=docker
I0920 18:35:32.673179   87188 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 18:35:32.673269   87188 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0920 18:35:32.708809   87188 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0920 18:35:32.708851   87188 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0920 18:35:32.708926   87188 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0920 18:35:32.708954   87188 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate806621081/002/docker-machine-driver-kvm2
	I0920 18:35:32.767182   87188 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate806621081/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc000599880 gz:0xc000599888 tar:0xc000599820 tar.bz2:0xc000599830 tar.gz:0xc000599840 tar.xz:0xc000599850 tar.zst:0xc000599860 tbz2:0xc000599830 tgz:0xc000599840 txz:0xc000599850 tzst:0xc000599860 xz:0xc000599890 zip:0xc0005998a0 zst:0xc000599898] Getters:map[file:0xc001a8aef0 http:0xc001b688c0 https:0xc001b68910] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 18:35:32.767248   87188 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate806621081/002/docker-machine-driver-kvm2
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1625050869 start -p stopped-upgrade-409858 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m24.941304588s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1625050869 -p stopped-upgrade-409858 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1625050869 -p stopped-upgrade-409858 stop: (10.88663428s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-409858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-409858 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (23.959546985s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.79s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.69s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-438945 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-438945 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.677042748s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.69s)

                                                
                                    
TestPause/serial/Pause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-438945 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.62s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-438945 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-438945 --output=json --layout=cluster: exit status 2 (434.661686ms)

                                                
                                                
-- stdout --
	{"Name":"pause-438945","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-438945","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
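
The --layout=cluster JSON here and in TestInsufficientStorage reuses HTTP-flavored status codes (200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage). The small table below collects the pairs as they appear in this report; it is read off the log output, not copied from minikube's source:

    // Status codes observed in this report's --layout=cluster output.
    package main

    import "fmt"

    var statusNames = map[int]string{
        200: "OK",
        405: "Stopped",
        418: "Paused",
        500: "Error",
        507: "InsufficientStorage",
    }

    func main() {
        for code, name := range statusNames {
            fmt.Printf("%d => %s\n", code, name)
        }
    }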

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-438945 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.66s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-438945 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

                                                
                                    
TestPause/serial/DeletePaused (2.13s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-438945 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-438945 --alsologtostderr -v=5: (2.127478177s)
--- PASS: TestPause/serial/DeletePaused (2.13s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-438945
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-438945: exit status 1 (21.110785ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-438945: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.76s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-409858
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-409858: (1.117247246s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (64.177989ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-875633] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-80428/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-80428/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (25.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875633 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875633 --driver=docker  --container-runtime=docker: (24.912868636s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875633 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (127.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-971556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-971556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m7.28570295s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (127.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (16.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --driver=docker  --container-runtime=docker: (14.629146121s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-875633 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-875633 status -o json: exit status 2 (285.918735ms)

-- stdout --
	{"Name":"NoKubernetes-875633","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-875633
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-875633: (1.697605012s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.61s)
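
Note: after restarting an existing profile with --no-kubernetes, the expected state is Host "Running" with Kubelet and APIServer "Stopped", which is why `status` exits 2 here rather than 0. A sketch of the same check (profile name "demo" is a placeholder):

    $ minikube start -p demo --no-kubernetes --driver=docker
    $ minikube status -p demo -o json    # exit 2; JSON shows "Host":"Running","Kubelet":"Stopped","APIServer":"Stopped"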

TestNoKubernetes/serial/Start (6.03s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875633 --no-kubernetes --driver=docker  --container-runtime=docker: (6.026890177s)
--- PASS: TestNoKubernetes/serial/Start (6.03s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875633 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875633 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.758684ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
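
Note: "Process exited with status 3" is systemd's exit code for an inactive unit, so the non-zero exit is exactly what this assertion wants: kubelet is not running. The probe, shown against a hypothetical "demo" profile:

    $ minikube ssh -p demo "sudo systemctl is-active --quiet service kubelet"    # systemctl exits 3 (inactive); minikube ssh surfaces the failure as exit 1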

TestNoKubernetes/serial/ProfileList (43.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.139138734s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0920 18:40:06.828755   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:06.835145   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:06.846543   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:06.867930   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:06.909304   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:06.990707   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:07.152156   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:07.473834   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:08.115455   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:40:09.397699   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (29.944359176s)
--- PASS: TestNoKubernetes/serial/ProfileList (43.08s)
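
Note: the cert_rotation errors above refer to the client certificate of the skaffold-716249 profile, which an earlier test had already deleted; they are client-go retry noise rather than a failure of this test, though they coincide with the slow listing. The two commands the test times are:

    $ minikube profile list
    $ minikube profile list --output=json    # machine-readable variant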

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-875633
E0920 18:40:27.322944   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-875633: (1.221823962s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (8.06s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-875633 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-875633 --driver=docker  --container-runtime=docker: (8.058696227s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.06s)

TestStartStop/group/no-preload/serial/FirstStart (44.39s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-944084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-944084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (44.386074362s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (44.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-875633 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-875633 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.18383ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStartStop/group/embed-certs/serial/FirstStart (67.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-389184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:40:47.804232   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-389184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m7.830713532s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.83s)

TestStartStop/group/no-preload/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-944084 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61379515-b399-4165-823b-0aa86632392c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61379515-b399-4165-823b-0aa86632392c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003397398s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-944084 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.24s)
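
Note: the deploy check is a three-step flow: create the busybox pod from testdata, wait for the integration-test=busybox label to report Running, then exec into the pod. A sketch with the same kubectl context this run used:

    $ kubectl --context no-preload-944084 create -f testdata/busybox.yaml
    $ kubectl --context no-preload-944084 exec busybox -- /bin/sh -c "ulimit -n"    # prints the pod's open-file-descriptor limit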

TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-971556 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e48bc351-bdea-4dca-98d5-9b534b667a43] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e48bc351-bdea-4dca-98d5-9b534b667a43] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.015317713s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-971556 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-944084 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-944084 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/no-preload/serial/Stop (10.73s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-944084 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-944084 --alsologtostderr -v=3: (10.726849683s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-971556 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-971556 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.67s)

TestStartStop/group/old-k8s-version/serial/Stop (10.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-971556 --alsologtostderr -v=3
E0920 18:41:28.766579   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-971556 --alsologtostderr -v=3: (10.815848094s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.82s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944084 -n no-preload-944084
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944084 -n no-preload-944084: exit status 7 (107.492813ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-944084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
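
Note: exit status 7 from `minikube status` encodes a stopped host, hence the "may be ok" wording; the point of this subtest is that addons can still be enabled while the cluster is down. The same two steps by hand:

    $ minikube status --format={{.Host}} -p no-preload-944084    # prints "Stopped", exits 7
    $ minikube addons enable dashboard -p no-preload-944084 --images=MetricsScraper=registry.k8s.io/echoserver:1.4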

TestStartStop/group/no-preload/serial/SecondStart (300.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-944084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-944084 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m59.953248915s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-944084 -n no-preload-944084
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.27s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-971556 -n old-k8s-version-971556
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-971556 -n old-k8s-version-971556: exit status 7 (61.97942ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-971556 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (125.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-971556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-971556 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m5.027623496s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-971556 -n old-k8s-version-971556
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (125.31s)

TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-389184 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [85959e01-c312-49b5-82a0-95bb0334b3d1] Pending
helpers_test.go:344: "busybox" [85959e01-c312-49b5-82a0-95bb0334b3d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [85959e01-c312-49b5-82a0-95bb0334b3d1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004478541s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-389184 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.37s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-389184 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-389184 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-389184 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-389184 --alsologtostderr -v=3: (10.895298312s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-389184 -n embed-certs-389184
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-389184 -n embed-certs-389184: exit status 7 (102.555849ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-389184 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (262.58s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-389184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-389184 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m22.260541734s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-389184 -n embed-certs-389184
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.58s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-468856 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0920 18:43:25.386901   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-468856 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m0.171027845s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.17s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r988p" [f07c2926-6cce-4e89-a7d1-43328df6d888] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004370761s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-r988p" [f07c2926-6cce-4e89-a7d1-43328df6d888] Running
E0920 18:43:51.403292   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004396928s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-971556 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-468856 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cbefb8e-297b-4262-a975-f68bf99c31c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7cbefb8e-297b-4262-a975-f68bf99c31c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003783292s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-468856 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.29s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-971556 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-971556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-971556 -n old-k8s-version-971556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-971556 -n old-k8s-version-971556: exit status 2 (278.382706ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-971556 -n old-k8s-version-971556
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-971556 -n old-k8s-version-971556: exit status 2 (284.741968ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-971556 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-971556 -n old-k8s-version-971556
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-971556 -n old-k8s-version-971556
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.31s)
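
Note: after `minikube pause`, {{.APIServer}} reports "Paused" and {{.Kubelet}} reports "Stopped", each with exit status 2; the test treats those non-zero exits as expected and then verifies `unpause` restores both. Condensed:

    $ minikube pause -p old-k8s-version-971556 --alsologtostderr -v=1
    $ minikube status --format={{.APIServer}} -p old-k8s-version-971556    # "Paused", exit 2
    $ minikube unpause -p old-k8s-version-971556 --alsologtostderr -v=1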

TestStartStop/group/newest-cni/serial/FirstStart (28.76s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-906904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-906904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (28.759716176s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.76s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-468856 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-468856 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-468856 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-468856 --alsologtostderr -v=3: (10.793333158s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856: exit status 7 (110.499398ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-468856 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-468856 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-468856 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.417894141s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.74s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-906904 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/newest-cni/serial/Stop (9.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-906904 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-906904 --alsologtostderr -v=3: (9.791297313s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906904 -n newest-cni-906904
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906904 -n newest-cni-906904: exit status 7 (100.516499ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-906904 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (14.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-906904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-906904 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (14.64820503s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-906904 -n newest-cni-906904
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.95s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-906904 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-906904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906904 -n newest-cni-906904
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906904 -n newest-cni-906904: exit status 2 (282.50117ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906904 -n newest-cni-906904
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906904 -n newest-cni-906904: exit status 2 (276.159944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-906904 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-906904 -n newest-cni-906904
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-906904 -n newest-cni-906904
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.36s)

TestNetworkPlugins/group/auto/Start (66.48s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0920 18:45:06.828962   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:45:22.319712   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/addons-535596/client.crt: no such file or directory" logger="UnhandledError"
E0920 18:45:34.530062   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/skaffold-716249/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m6.477999345s)
--- PASS: TestNetworkPlugins/group/auto/Start (66.48s)

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-481902 "pgrep -a kubelet"
I0920 18:46:06.877207   87188 config.go:182] Loaded profile config "auto-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c6ptl" [cf95fdd2-a2f3-408c-af7f-45a61d9201f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c6ptl" [cf95fdd2-a2f3-408c-af7f-45a61d9201f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003849674s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.17s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)
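
Note: the DNS check resolves the in-cluster name kubernetes.default from inside the netcat deployment, confirming cluster DNS works under the auto (default) network plugin:

    $ kubectl --context auto-481902 exec deployment/netcat -- nslookup kubernetes.default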

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
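
Note: Localhost and HairPin differ by one hop: Localhost dials localhost:8080 inside the pod, while HairPin dials the pod's own service name ("netcat"), so the connection leaves the pod and comes back, exercising hairpin NAT. Side by side:

    $ kubectl --context auto-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # stays inside the pod
    $ kubectl --context auto-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # pod -> its own service -> pod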

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-597np" [2226bb6f-93cc-48a5-ab17-a6f735c74f86] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003780728s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/flannel/Start (39.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (39.669368089s)
--- PASS: TestNetworkPlugins/group/flannel/Start (39.67s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m2h4n" [3d193ff0-670a-48e7-b462-c8277b4c8c4d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003102936s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-597np" [2226bb6f-93cc-48a5-ab17-a6f735c74f86] Running
E0920 18:46:39.145897   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/old-k8s-version-971556/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003919512s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-389184 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-389184 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/embed-certs/serial/Pause (2.32s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-389184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-389184 -n embed-certs-389184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-389184 -n embed-certs-389184: exit status 2 (274.113114ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-389184 -n embed-certs-389184
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-389184 -n embed-certs-389184: exit status 2 (274.113243ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-389184 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-389184 -n embed-certs-389184
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-389184 -n embed-certs-389184
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.32s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m2h4n" [3d193ff0-670a-48e7-b462-c8277b4c8c4d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004141422s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-944084 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestNetworkPlugins/group/enable-default-cni/Start (67.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m7.798628367s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (67.80s)
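Every TestNetworkPlugins Start subtest in this run shares this invocation and varies only the networking flag. Collected from the log for comparison (the bracketed alternation is ours, not real flag syntax):

    out/minikube-linux-amd64 start -p <plugin>-481902 --memory=3072 --alsologtostderr \
        --wait=true --wait-timeout=15m --driver=docker --container-runtime=docker \
        [--enable-default-cni=true | --cni=bridge | --cni=calico | --cni=kindnet | --cni=testdata/kube-flannel.yaml | --cni=false | --network-plugin=kubenet]

--enable-default-cni is the older spelling that current minikube treats as a bridge CNI; kubenet is selected through --network-plugin rather than --cni.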

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-944084 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.49s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-944084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944084 -n no-preload-944084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944084 -n no-preload-944084: exit status 2 (297.232982ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-944084 -n no-preload-944084
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-944084 -n no-preload-944084: exit status 2 (308.663701ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-944084 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-944084 -n no-preload-944084
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-944084 -n no-preload-944084
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.49s)

TestNetworkPlugins/group/bridge/Start (67.57s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0920 18:46:59.627868   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/old-k8s-version-971556/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m7.567329803s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.57s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gm6n5" [bd0c54b2-e126-45df-867e-7c77cb919b9a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004315134s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-481902 "pgrep -a kubelet"
I0920 18:47:20.121942   87188 config.go:182] Loaded profile config "flannel-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dssmb" [28442c8d-aaf5-4fe9-bae7-82fca710a74f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dssmb" [28442c8d-aaf5-4fe9-bae7-82fca710a74f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004142414s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)
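Each NetCatPod subtest installs the same netcat probe Deployment and waits for it to turn Ready. A rough hand-run equivalent against the same context (the suite polls with its own helper; the kubectl wait call is our substitution):

    kubectl --context flannel-481902 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-481902 wait --for=condition=Ready pod -l app=netcat --timeout=15m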

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)
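With the netcat pod Ready, the DNS, Localhost, and HairPin checks for each plugin reduce to the three one-liners seen above; HairPin verifies that a pod can reach itself through its own Service name. Gathered here for reference:

    kubectl --context flannel-481902 exec deployment/netcat -- nslookup kubernetes.default                  # cluster DNS
    kubectl --context flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"  # loopback
    kubectl --context flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"     # hairpin via the netcat Service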

TestNetworkPlugins/group/kubenet/Start (42.2s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (42.204661099s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (42.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-481902 "pgrep -a kubelet"
I0920 18:47:55.020357   87188 config.go:182] Loaded profile config "enable-default-cni-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-856sr" [c9c09036-6ab1-4d5d-a6f9-9c22b19e4c18] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-856sr" [c9c09036-6ab1-4d5d-a6f9-9c22b19e4c18] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003779487s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-481902 "pgrep -a kubelet"
I0920 18:48:02.495909   87188 config.go:182] Loaded profile config "bridge-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (14.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8bm9k" [94679015-6a9d-4563-bbdb-cbf753965896] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8bm9k" [94679015-6a9d-4563-bbdb-cbf753965896] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.005740402s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (67.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.962092629s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.96s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-481902 "pgrep -a kubelet"
I0920 18:48:30.566413   87188 config.go:182] Loaded profile config "kubenet-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4hbqg" [c6363f29-0c18-49e3-8b99-049ef7196b28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4hbqg" [c6363f29-0c18-49e3-8b99-049ef7196b28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003604737s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.21s)

TestNetworkPlugins/group/kindnet/Start (62.49s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m2.48611875s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (62.49s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5bnmj" [2b9a3374-3e3a-442e-a9af-40054bdc70ba] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003211159s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)
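UserAppExistsAfterStop (and AddonExistsAfterStop below) assert only that a labelled pod returns to health after the stop/start cycle. A rough kubectl equivalent of the helper's poll, assuming the same context (the suite does not actually shell out to kubectl):

    kubectl --context default-k8s-diff-port-468856 -n kubernetes-dashboard wait \
        --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m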

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5bnmj" [2b9a3374-3e3a-442e-a9af-40054bdc70ba] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00452251s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-468856 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-468856 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-468856 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856: exit status 2 (326.749088ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856: exit status 2 (330.553539ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-468856 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
E0920 18:48:51.402719   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/functional-831303/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-468856 -n default-k8s-diff-port-468856
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.79s)

TestNetworkPlugins/group/custom-flannel/Start (50.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (50.155658982s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.16s)

TestNetworkPlugins/group/false/Start (65.25s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0920 18:49:02.511154   87188 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/old-k8s-version-971556/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-481902 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m5.248953986s)
--- PASS: TestNetworkPlugins/group/false/Start (65.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tphgh" [1c232817-2d27-415b-af4b-e24dd49d6023] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00502772s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-481902 "pgrep -a kubelet"
I0920 18:49:37.655155   87188 config.go:182] Loaded profile config "calico-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zzmfx" [73f88fe7-827a-4c70-8a79-2cb82fe8784c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zzmfx" [73f88fe7-827a-4c70-8a79-2cb82fe8784c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00394076s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-kqs9z" [5206c095-4ed7-4fc9-bb57-8fff4bd54bf6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004183109s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-481902 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-481902 "pgrep -a kubelet"
I0920 18:49:44.987567   87188 config.go:182] Loaded profile config "kindnet-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-481902 replace --force -f testdata/netcat-deployment.yaml
I0920 18:49:45.045052   87188 config.go:182] Loaded profile config "custom-flannel-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ptglr" [b62af3b3-932b-4a24-ab30-5919af2a1eb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ptglr" [b62af3b3-932b-4a24-ab30-5919af2a1eb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003574177s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6x9f4" [38039295-16fc-4606-82b8-a1ae07601e96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6x9f4" [38039295-16fc-4606-82b8-a1ae07601e96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003697871s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestNetworkPlugins/group/false/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-481902 "pgrep -a kubelet"
I0920 18:50:07.936714   87188 config.go:182] Loaded profile config "false-481902": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.30s)

TestNetworkPlugins/group/false/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-481902 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zdzvx" [4ab07c2e-8339-41c7-9ddb-3b733d9669d1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zdzvx" [4ab07c2e-8339-41c7-9ddb-3b733d9669d1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00370654s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.20s)

TestNetworkPlugins/group/false/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-481902 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.12s)

TestNetworkPlugins/group/false/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.10s)

TestNetworkPlugins/group/false/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-481902 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.10s)

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-932220" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-932220
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/cilium (3.16s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-481902 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-481902

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-481902

>>> host: /etc/nsswitch.conf:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/hosts:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/resolv.conf:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-481902

>>> host: crictl pods:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: crictl containers:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> k8s: describe netcat deployment:
error: context "cilium-481902" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-481902" does not exist

>>> k8s: netcat logs:
error: context "cilium-481902" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-481902" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-481902" does not exist

>>> k8s: coredns logs:
error: context "cilium-481902" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-481902" does not exist

>>> k8s: api server logs:
error: context "cilium-481902" does not exist

>>> host: /etc/cni:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: ip a s:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: ip r s:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: iptables-save:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: iptables table nat:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-481902

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-481902

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-481902" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-481902" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-481902

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-481902

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-481902" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-481902" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-481902" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-481902" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-481902" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: kubelet daemon config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> k8s: kubelet logs:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 18:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-836313
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-80428/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 18:38:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-195702
contexts:
- context:
    cluster: cert-expiration-836313
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 18:37:09 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: cert-expiration-836313
  name: cert-expiration-836313
- context:
    cluster: kubernetes-upgrade-195702
    user: kubernetes-upgrade-195702
  name: kubernetes-upgrade-195702
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-836313
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/cert-expiration-836313/client.crt
    client-key: /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/cert-expiration-836313/client.key
- name: kubernetes-upgrade-195702
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/kubernetes-upgrade-195702/client.crt
    client-key: /home/jenkins/minikube-integration/19678-80428/.minikube/profiles/kubernetes-upgrade-195702/client.key

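The kubectl config dump above also explains every failure in this debugLogs run: the kubeconfig holds entries only for cert-expiration-836313 and kubernetes-upgrade-195702, current-context is empty, and no cluster, context, or user named cilium-481902 exists, so every command pointed at that profile can only error out. A minimal way to confirm this from a shell, as a sketch (context names are taken from the dump above; exact error wording may vary by kubectl version):

	$ kubectl config get-contexts -o name    # lists only the contexts actually present
	cert-expiration-836313
	kubernetes-upgrade-195702
	$ kubectl config current-context         # fails because current-context is ""
	error: current-context is not set
	$ minikube profile list                  # shows which minikube profiles exist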
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-481902

>>> host: docker daemon status:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: docker daemon config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: docker system info:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: cri-docker daemon status:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: cri-docker daemon config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: cri-dockerd version:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: containerd daemon status:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: containerd daemon config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: containerd config dump:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: crio daemon status:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: crio daemon config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: /etc/crio:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

>>> host: crio config:
* Profile "cilium-481902" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-481902"

----------------------- debugLogs end: cilium-481902 [took: 3.014563485s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-481902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-481902
--- SKIP: TestNetworkPlugins/group/cilium (3.16s)