Test Report: Docker_Linux 19696

60137f5eb61dd17472aeb1c9d9b63bd7ae7f04e6:2024-09-24:36347

Failed tests (1/342)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.49        |
TestAddons/parallel/Registry (73.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.68415ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-p8v2z" [09f0475c-4746-427a-ab8c-9c11b2ee2bfa] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003957094s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sgxhg" [d8940b72-00d5-4d8d-94d1-657f7a3dfea2] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004582816s
addons_test.go:338: (dbg) Run:  kubectl --context addons-537454 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-537454 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-537454 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.079599637s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-537454 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
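The failing probe can be rerun by hand against the same profile; a minimal sketch, assuming the addons-537454 cluster from this run is still up (all names are taken from the log above):

	kubectl --context addons-537454 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

A "HTTP/1.1 200" status line in the wget headers is what the assertion at addons_test.go:349 expects; the one-minute timeout seen above suggests the registry Service (or in-cluster DNS) never answered.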
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 ip
2024/09/23 23:51:36 [DEBUG] GET http://192.168.49.2:5000
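The DEBUG line above is the harness probing the registry through the node IP instead; the same check can be made from the host with curl, assuming the addon serves the standard Docker Registry v2 API on port 5000:

	curl -sS http://192.168.49.2:5000/v2/

An HTTP 200 with an empty JSON body here would mean the registry container itself is healthy, narrowing the failure to resolution or routing inside the cluster.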
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-537454
helpers_test.go:235: (dbg) docker inspect addons-537454:

-- stdout --
	[
	    {
	        "Id": "f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c",
	        "Created": "2024-09-23T23:38:30.953236622Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 16354,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T23:38:31.096294991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fba5f082b59effd6acfcb1eed3d3f86a23bd3a65463877f8197a730d49f52a09",
	        "ResolvConfPath": "/var/lib/docker/containers/f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c/hosts",
	        "LogPath": "/var/lib/docker/containers/f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c/f8cd81e81c6fc2d3372a4d56a953020fc19b9aac2c4b104c9f9547250b0f142c-json.log",
	        "Name": "/addons-537454",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-537454:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-537454",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/448b9488ae0d066c7c5f109b161ce376be6f09dcf855c64b85e59560dfc47529-init/diff:/var/lib/docker/overlay2/fb91fcac56c4c868a1a8ed5f0f010197833c519445998b9f134a5286f2fd7eff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/448b9488ae0d066c7c5f109b161ce376be6f09dcf855c64b85e59560dfc47529/merged",
	                "UpperDir": "/var/lib/docker/overlay2/448b9488ae0d066c7c5f109b161ce376be6f09dcf855c64b85e59560dfc47529/diff",
	                "WorkDir": "/var/lib/docker/overlay2/448b9488ae0d066c7c5f109b161ce376be6f09dcf855c64b85e59560dfc47529/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-537454",
	                "Source": "/var/lib/docker/volumes/addons-537454/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-537454",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-537454",
	                "name.minikube.sigs.k8s.io": "addons-537454",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "98ae0726ce7a11aec2f000175fa2cce7ba7ef3992bae3a2177d3ba1924597736",
	            "SandboxKey": "/var/run/docker/netns/98ae0726ce7a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-537454": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1e9e57f8932d03da376634ab9e58ee4cf63393a725b87413dabfa136b08e4f7a",
	                    "EndpointID": "9337d5b2515275a277e73f92be463fad1eab58ba27ee1fb150082aed74eb27fe",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-537454",
	                        "f8cd81e81c6f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
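Single fields can be pulled out of this inspect output with a Go template rather than reading the full JSON; a sketch reusing the template style the harness itself applies to port 22/tcp later in this log, adapted here to the registry port:

	docker inspect addons-537454 -f '{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'

Against the state captured above this prints 32770, the localhost port mapped to the container's registry port 5000.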
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-537454 -n addons-537454
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-467922 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | download-docker-467922                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-467922                                                                   | download-docker-467922 | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-675850   | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | binary-mirror-675850                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43631                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-675850                                                                     | binary-mirror-675850   | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-537454                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC |                     |
	|         | addons-537454                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-537454 --wait=true                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:38 UTC | 23 Sep 24 23:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:42 UTC | 23 Sep 24 23:42 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | -p addons-537454                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | -p addons-537454                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | addons-537454                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-537454 ssh cat                                                                       | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | /opt/local-path-provisioner/pvc-b1079101-08ea-46c0-97ce-99eeccde2570_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:51 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-537454 addons                                                                        | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:50 UTC | 23 Sep 24 23:50 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | addons-537454                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-537454 ssh curl -s                                                                   | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-537454 ip                                                                            | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-537454 addons                                                                        | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-537454 addons                                                                        | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-537454 ip                                                                            | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	| addons  | addons-537454 addons disable                                                                | addons-537454          | jenkins | v1.34.0 | 23 Sep 24 23:51 UTC | 23 Sep 24 23:51 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:38:08
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:38:08.995810   15595 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:38:08.996079   15595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:08.996097   15595 out.go:358] Setting ErrFile to fd 2...
	I0923 23:38:08.996104   15595 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:38:08.996572   15595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0923 23:38:08.997262   15595 out.go:352] Setting JSON to false
	I0923 23:38:08.998113   15595 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1233,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:38:08.998205   15595 start.go:139] virtualization: kvm guest
	I0923 23:38:09.000683   15595 out.go:177] * [addons-537454] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:38:09.002075   15595 notify.go:220] Checking for updates...
	I0923 23:38:09.002088   15595 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:38:09.003718   15595 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:38:09.005412   15595 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:38:09.006913   15595 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	I0923 23:38:09.008163   15595 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:38:09.009312   15595 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:38:09.010638   15595 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:38:09.031569   15595 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 23:38:09.031681   15595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:38:09.077110   15595 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 23:38:09.068790421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:38:09.077201   15595 docker.go:318] overlay module found
	I0923 23:38:09.079229   15595 out.go:177] * Using the docker driver based on user configuration
	I0923 23:38:09.080763   15595 start.go:297] selected driver: docker
	I0923 23:38:09.080781   15595 start.go:901] validating driver "docker" against <nil>
	I0923 23:38:09.080792   15595 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:38:09.081522   15595 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:38:09.125721   15595 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 23:38:09.117139708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:38:09.125886   15595 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:38:09.126142   15595 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:38:09.128389   15595 out.go:177] * Using Docker driver with root privileges
	I0923 23:38:09.129939   15595 cni.go:84] Creating CNI manager for ""
	I0923 23:38:09.130009   15595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:38:09.130044   15595 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:38:09.130118   15595 start.go:340] cluster config:
	{Name:addons-537454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-537454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:09.131714   15595 out.go:177] * Starting "addons-537454" primary control-plane node in "addons-537454" cluster
	I0923 23:38:09.132919   15595 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 23:38:09.134508   15595 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0923 23:38:09.136053   15595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:38:09.136094   15595 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 23:38:09.136097   15595 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0923 23:38:09.136104   15595 cache.go:56] Caching tarball of preloaded images
	I0923 23:38:09.136279   15595 preload.go:172] Found /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0923 23:38:09.136291   15595 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0923 23:38:09.136617   15595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/config.json ...
	I0923 23:38:09.136639   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/config.json: {Name:mk04d998fd9b0118232851783173183517fc8ad2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:09.152125   15595 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0923 23:38:09.152229   15595 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0923 23:38:09.152244   15595 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0923 23:38:09.152248   15595 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0923 23:38:09.152257   15595 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0923 23:38:09.152262   15595 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0923 23:38:21.441816   15595 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0923 23:38:21.441855   15595 cache.go:194] Successfully downloaded all kic artifacts
	I0923 23:38:21.441909   15595 start.go:360] acquireMachinesLock for addons-537454: {Name:mka6c41e784cb00dcb0e41a385660b7b8df6a65e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 23:38:21.442031   15595 start.go:364] duration metric: took 99.481µs to acquireMachinesLock for "addons-537454"
	I0923 23:38:21.442071   15595 start.go:93] Provisioning new machine with config: &{Name:addons-537454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-537454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 23:38:21.442152   15595 start.go:125] createHost starting for "" (driver="docker")
	I0923 23:38:21.445186   15595 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 23:38:21.445414   15595 start.go:159] libmachine.API.Create for "addons-537454" (driver="docker")
	I0923 23:38:21.445452   15595 client.go:168] LocalClient.Create starting
	I0923 23:38:21.445577   15595 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem
	I0923 23:38:21.567638   15595 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/cert.pem
	I0923 23:38:21.821793   15595 cli_runner.go:164] Run: docker network inspect addons-537454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 23:38:21.837527   15595 cli_runner.go:211] docker network inspect addons-537454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 23:38:21.837598   15595 network_create.go:284] running [docker network inspect addons-537454] to gather additional debugging logs...
	I0923 23:38:21.837618   15595 cli_runner.go:164] Run: docker network inspect addons-537454
	W0923 23:38:21.852902   15595 cli_runner.go:211] docker network inspect addons-537454 returned with exit code 1
	I0923 23:38:21.852933   15595 network_create.go:287] error running [docker network inspect addons-537454]: docker network inspect addons-537454: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-537454 not found
	I0923 23:38:21.852945   15595 network_create.go:289] output of [docker network inspect addons-537454]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-537454 not found
	
	** /stderr **
	I0923 23:38:21.853022   15595 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 23:38:21.869029   15595 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019f6790}
	I0923 23:38:21.869073   15595 network_create.go:124] attempt to create docker network addons-537454 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 23:38:21.869120   15595 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-537454 addons-537454
	I0923 23:38:21.927892   15595 network_create.go:108] docker network addons-537454 192.168.49.0/24 created
	I0923 23:38:21.927920   15595 kic.go:121] calculated static IP "192.168.49.2" for the "addons-537454" container
	I0923 23:38:21.927980   15595 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 23:38:21.944031   15595 cli_runner.go:164] Run: docker volume create addons-537454 --label name.minikube.sigs.k8s.io=addons-537454 --label created_by.minikube.sigs.k8s.io=true
	I0923 23:38:21.960539   15595 oci.go:103] Successfully created a docker volume addons-537454
	I0923 23:38:21.960609   15595 cli_runner.go:164] Run: docker run --rm --name addons-537454-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-537454 --entrypoint /usr/bin/test -v addons-537454:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0923 23:38:26.865572   15595 cli_runner.go:217] Completed: docker run --rm --name addons-537454-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-537454 --entrypoint /usr/bin/test -v addons-537454:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (4.904931075s)
	I0923 23:38:26.865595   15595 oci.go:107] Successfully prepared a docker volume addons-537454
	I0923 23:38:26.865623   15595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:38:26.865641   15595 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 23:38:26.865699   15595 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-537454:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 23:38:30.893439   15595 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-537454:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.027692759s)
	I0923 23:38:30.893470   15595 kic.go:203] duration metric: took 4.027824476s to extract preloaded images to volume ...
	W0923 23:38:30.893599   15595 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 23:38:30.893711   15595 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 23:38:30.938320   15595 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-537454 --name addons-537454 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-537454 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-537454 --network addons-537454 --ip 192.168.49.2 --volume addons-537454:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0923 23:38:31.251026   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Running}}
	I0923 23:38:31.268758   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:31.287395   15595 cli_runner.go:164] Run: docker exec addons-537454 stat /var/lib/dpkg/alternatives/iptables
	I0923 23:38:31.327405   15595 oci.go:144] the created container "addons-537454" has a running status.
	I0923 23:38:31.327436   15595 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa...
	I0923 23:38:31.474411   15595 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 23:38:31.494348   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:31.512464   15595 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 23:38:31.512484   15595 kic_runner.go:114] Args: [docker exec --privileged addons-537454 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 23:38:31.562647   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:31.589347   15595 machine.go:93] provisionDockerMachine start ...
	I0923 23:38:31.589432   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:31.606247   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:31.606475   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:31.606490   15595 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 23:38:31.607022   15595 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46604->127.0.0.1:32768: read: connection reset by peer
	I0923 23:38:34.717425   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-537454
	
	I0923 23:38:34.717455   15595 ubuntu.go:169] provisioning hostname "addons-537454"
	I0923 23:38:34.717517   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:34.734993   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:34.735177   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:34.735195   15595 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-537454 && echo "addons-537454" | sudo tee /etc/hostname
	I0923 23:38:34.856484   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-537454
	
	I0923 23:38:34.856562   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:34.874764   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:34.874925   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:34.874941   15595 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-537454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-537454/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-537454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 23:38:34.985984   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: 
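The guarded shell above only touches /etc/hosts when the hostname mapping is missing: it rewrites an existing 127.0.1.1 line if there is one, otherwise appends a new entry. An in-process sketch of the same decision, for illustration only:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry mirrors the logged shell: no-op when the hostname is
// already mapped, rewrite 127.0.1.1 when present, append otherwise.
func ensureHostsEntry(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
		return nil // hostname already mapped
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.Match(data) {
		data = re.ReplaceAll(data, []byte("127.0.1.1 "+name))
	} else {
		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
	}
	return os.WriteFile(path, data, 0644)
}

func main() { fmt.Println(ensureHostsEntry("/etc/hosts", "addons-537454")) }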
	I0923 23:38:34.986029   15595 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19696-7438/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-7438/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-7438/.minikube}
	I0923 23:38:34.986069   15595 ubuntu.go:177] setting up certificates
	I0923 23:38:34.986080   15595 provision.go:84] configureAuth start
	I0923 23:38:34.986127   15595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-537454
	I0923 23:38:35.002662   15595 provision.go:143] copyHostCerts
	I0923 23:38:35.002733   15595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-7438/.minikube/ca.pem (1078 bytes)
	I0923 23:38:35.002837   15595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-7438/.minikube/cert.pem (1123 bytes)
	I0923 23:38:35.002894   15595 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-7438/.minikube/key.pem (1679 bytes)
	I0923 23:38:35.002941   15595 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-7438/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca-key.pem org=jenkins.addons-537454 san=[127.0.0.1 192.168.49.2 addons-537454 localhost minikube]
	I0923 23:38:35.293595   15595 provision.go:177] copyRemoteCerts
	I0923 23:38:35.293656   15595 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 23:38:35.293688   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:35.309749   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:35.393981   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0923 23:38:35.414973   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 23:38:35.435675   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0923 23:38:35.455896   15595 provision.go:87] duration metric: took 469.802631ms to configureAuth
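configureAuth, completed above, boils down to issuing a CA-signed server certificate carrying the SANs listed at 23:38:35.002941 (loopback, the container IP, and the host names). A minimal sketch of that signing step with Go's standard crypto/x509; serial numbers, key sizes, and validity are illustrative, and this is not minikube's actual provision code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by ca/caKey with the IP
// and DNS SANs the log shows for this machine.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-537454"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-537454", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA so the example runs end to end;
	// errors are ignored for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := signServerCert(ca, caKey)
	fmt.Println(len(der), err)
}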
	I0923 23:38:35.455925   15595 ubuntu.go:193] setting minikube options for container-runtime
	I0923 23:38:35.456070   15595 config.go:182] Loaded profile config "addons-537454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:38:35.456114   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:35.472385   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:35.472574   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:35.472586   15595 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0923 23:38:35.586313   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0923 23:38:35.586333   15595 ubuntu.go:71] root file system type: overlay
	I0923 23:38:35.586443   15595 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0923 23:38:35.586491   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:35.602945   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:35.603146   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:35.603215   15595 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0923 23:38:35.724717   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0923 23:38:35.724792   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:35.741589   15595 main.go:141] libmachine: Using SSH client type: native
	I0923 23:38:35.741760   15595 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 23:38:35.741778   15595 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0923 23:38:36.407566   15595 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:29.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-23 23:38:35.719455766 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
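	The diff output above is the expected path on first boot: the rendered docker.service.new differs from the stock unit, so it is moved into place and the daemon re-enabled and restarted; when the files match, nothing is touched. A minimal Go sketch of that update-if-changed sequence, assuming passwordless sudo (paths mirror the log):

package main

import (
	"fmt"
	"os/exec"
)

// updateUnitIfChanged installs the rendered unit only when it differs from
// the live one, then reloads and restarts the daemon.
func updateUnitIfChanged() error {
	cur := "/lib/systemd/system/docker.service"
	next := "/lib/systemd/system/docker.service.new"
	// diff exits 0 when the files are identical; any difference (or error)
	// triggers the install path, matching the shell "diff || { ... }" idiom.
	if exec.Command("sudo", "diff", "-u", cur, next).Run() == nil {
		return nil // identical: nothing to do
	}
	for _, args := range [][]string{
		{"sudo", "mv", next, cur},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() { fmt.Println(updateUnitIfChanged()) }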
	
	I0923 23:38:36.407600   15595 machine.go:96] duration metric: took 4.81823344s to provisionDockerMachine
	I0923 23:38:36.407610   15595 client.go:171] duration metric: took 14.962149777s to LocalClient.Create
	I0923 23:38:36.407628   15595 start.go:167] duration metric: took 14.962214464s to libmachine.API.Create "addons-537454"
	I0923 23:38:36.407637   15595 start.go:293] postStartSetup for "addons-537454" (driver="docker")
	I0923 23:38:36.407650   15595 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 23:38:36.407732   15595 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 23:38:36.407764   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:36.424283   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:36.510706   15595 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 23:38:36.513720   15595 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 23:38:36.513751   15595 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 23:38:36.513759   15595 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 23:38:36.513766   15595 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 23:38:36.513777   15595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7438/.minikube/addons for local assets ...
	I0923 23:38:36.513840   15595 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-7438/.minikube/files for local assets ...
	I0923 23:38:36.513872   15595 start.go:296] duration metric: took 106.229016ms for postStartSetup
	I0923 23:38:36.514184   15595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-537454
	I0923 23:38:36.530116   15595 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/config.json ...
	I0923 23:38:36.530400   15595 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 23:38:36.530455   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:36.546467   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:36.626601   15595 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 23:38:36.630480   15595 start.go:128] duration metric: took 15.188313949s to createHost
	I0923 23:38:36.630509   15595 start.go:83] releasing machines lock for "addons-537454", held for 15.18846154s
	I0923 23:38:36.630565   15595 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-537454
	I0923 23:38:36.648124   15595 ssh_runner.go:195] Run: cat /version.json
	I0923 23:38:36.648170   15595 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 23:38:36.648190   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:36.648228   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:36.664614   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:36.664939   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:36.822729   15595 ssh_runner.go:195] Run: systemctl --version
	I0923 23:38:36.826477   15595 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 23:38:36.830315   15595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0923 23:38:36.852306   15595 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0923 23:38:36.852361   15595 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 23:38:36.877157   15595 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 23:38:36.877183   15595 start.go:495] detecting cgroup driver to use...
	I0923 23:38:36.877210   15595 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 23:38:36.877300   15595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:36.891501   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0923 23:38:36.900493   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0923 23:38:36.909279   15595 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0923 23:38:36.909326   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0923 23:38:36.918168   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 23:38:36.926480   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0923 23:38:36.934718   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0923 23:38:36.943076   15595 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 23:38:36.950862   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0923 23:38:36.959503   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0923 23:38:36.968008   15595 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0923 23:38:36.976604   15595 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 23:38:36.984128   15595 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 23:38:36.984179   15595 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 23:38:36.996656   15595 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 23:38:37.004260   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:37.077533   15595 ssh_runner.go:195] Run: sudo systemctl restart containerd
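The run of sed commands above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host, then restarts the daemon. A hedged in-process equivalent of the key edit (the real flow shells out to sed over SSH, as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// forceCgroupfs rewrites any SystemdCgroup assignment to false, preserving
// the original indentation, mirroring the sed at 23:38:36.909326.
func forceCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return os.WriteFile(path, re.ReplaceAll(data, []byte("${1}SystemdCgroup = false")), 0644)
}

func main() { fmt.Println(forceCgroupfs("/etc/containerd/config.toml")) }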
	I0923 23:38:37.160362   15595 start.go:495] detecting cgroup driver to use...
	I0923 23:38:37.160414   15595 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 23:38:37.160462   15595 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0923 23:38:37.170961   15595 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0923 23:38:37.171022   15595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0923 23:38:37.181535   15595 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 23:38:37.196521   15595 ssh_runner.go:195] Run: which cri-dockerd
	I0923 23:38:37.199626   15595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0923 23:38:37.208771   15595 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0923 23:38:37.225462   15595 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0923 23:38:37.306218   15595 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0923 23:38:37.404429   15595 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0923 23:38:37.404566   15595 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0923 23:38:37.420989   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:37.501358   15595 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0923 23:38:37.751036   15595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0923 23:38:37.761236   15595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 23:38:37.771311   15595 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0923 23:38:37.845312   15595 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0923 23:38:37.918501   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:37.989092   15595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0923 23:38:38.000702   15595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0923 23:38:38.010566   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:38.083368   15595 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0923 23:38:38.142779   15595 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0923 23:38:38.142864   15595 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0923 23:38:38.146256   15595 start.go:563] Will wait 60s for crictl version
	I0923 23:38:38.146303   15595 ssh_runner.go:195] Run: which crictl
	I0923 23:38:38.149454   15595 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 23:38:38.179723   15595 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0923 23:38:38.179781   15595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 23:38:38.203320   15595 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0923 23:38:38.228732   15595 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0923 23:38:38.228825   15595 cli_runner.go:164] Run: docker network inspect addons-537454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 23:38:38.244693   15595 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 23:38:38.248038   15595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:38.257622   15595 kubeadm.go:883] updating cluster {Name:addons-537454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-537454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 23:38:38.257731   15595 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:38:38.257780   15595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 23:38:38.275699   15595 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 23:38:38.275717   15595 docker.go:615] Images already preloaded, skipping extraction
	I0923 23:38:38.275770   15595 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0923 23:38:38.292934   15595 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0923 23:38:38.292956   15595 cache_images.go:84] Images are preloaded, skipping loading
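The two identical image listings above back the preload short-circuit: when every expected image ref is already present in the runtime, both tarball extraction and cache loading are skipped. A small sketch of that presence check, with the expected refs taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded lists image refs in the runtime and reports whether every
// expected ref is already present.
func imagesPreloaded(expected []string) (bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return false, err
	}
	have := map[string]bool{}
	for _, ref := range strings.Fields(string(out)) {
		have[ref] = true
	}
	for _, ref := range expected {
		if !have[ref] {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := imagesPreloaded([]string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	})
	fmt.Println(ok, err)
}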
	I0923 23:38:38.292965   15595 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0923 23:38:38.293046   15595 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-537454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-537454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 23:38:38.293092   15595 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0923 23:38:38.334993   15595 cni.go:84] Creating CNI manager for ""
	I0923 23:38:38.335020   15595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:38:38.335033   15595 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 23:38:38.335074   15595 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-537454 NodeName:addons-537454 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 23:38:38.335237   15595 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-537454"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
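	
	The kubeadm config above is rendered from the cluster settings before being written out as /var/tmp/minikube/kubeadm.yaml.new. A toy text/template rendering in the same spirit; the template body and field names here are invented for illustration and are not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeletCfg is a hypothetical fragment of the KubeletConfiguration section
// shown in the log, parameterized on a few cluster settings.
const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: {{.CgroupDriver}}
clusterDomain: "{{.DNSDomain}}"
containerRuntimeEndpoint: {{.CRISocket}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletCfg))
	_ = t.Execute(os.Stdout, map[string]string{
		"CgroupDriver": "cgroupfs",
		"DNSDomain":    "cluster.local",
		"CRISocket":    "unix:///var/run/cri-dockerd.sock",
	})
}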
	
	I0923 23:38:38.335296   15595 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 23:38:38.343205   15595 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 23:38:38.343270   15595 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 23:38:38.350803   15595 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0923 23:38:38.367309   15595 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 23:38:38.383596   15595 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0923 23:38:38.399803   15595 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 23:38:38.403057   15595 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 23:38:38.412586   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:38.482361   15595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:38:38.494625   15595 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454 for IP: 192.168.49.2
	I0923 23:38:38.494661   15595 certs.go:194] generating shared ca certs ...
	I0923 23:38:38.494680   15595 certs.go:226] acquiring lock for ca certs: {Name:mk58861dc8405e290d4d335b8c9b6a3834f35c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.494812   15595 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-7438/.minikube/ca.key
	I0923 23:38:38.610163   15595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7438/.minikube/ca.crt ...
	I0923 23:38:38.610194   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/ca.crt: {Name:mkf268b0a48f9c264926defce7bd0880fb79a0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.610380   15595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7438/.minikube/ca.key ...
	I0923 23:38:38.610396   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/ca.key: {Name:mkf63085087d7f21d63e7b725611d77ba52c58cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.610495   15595 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.key
	I0923 23:38:38.861326   15595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.crt ...
	I0923 23:38:38.861356   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.crt: {Name:mk5563797d9e118756d610c9c474e542c5f95ae3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.861542   15595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.key ...
	I0923 23:38:38.861560   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.key: {Name:mk0a36d4ba9cddbd1c133e59b16db15c43826cbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.861655   15595 certs.go:256] generating profile certs ...
	I0923 23:38:38.861716   15595 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.key
	I0923 23:38:38.861733   15595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt with IP's: []
	I0923 23:38:38.947794   15595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt ...
	I0923 23:38:38.947824   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: {Name:mk4d93ff6847f8fc2a964fb2d90da0addb546724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.948008   15595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.key ...
	I0923 23:38:38.948025   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.key: {Name:mk6c07bf2215f6125b8fd821f618eb81f3123b5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:38.948129   15595 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key.67cf54a3
	I0923 23:38:38.948156   15595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt.67cf54a3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 23:38:39.047518   15595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt.67cf54a3 ...
	I0923 23:38:39.047547   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt.67cf54a3: {Name:mkb644dbea835d1512d92ea5f1e3bdcbffb469fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:39.047737   15595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key.67cf54a3 ...
	I0923 23:38:39.047753   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key.67cf54a3: {Name:mk8b10b1c0027890185ed1480433d1edd79d81f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:39.047852   15595 certs.go:381] copying /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt.67cf54a3 -> /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt
	I0923 23:38:39.047950   15595 certs.go:385] copying /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key.67cf54a3 -> /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key
	I0923 23:38:39.048022   15595 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.key
	I0923 23:38:39.048043   15595 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.crt with IP's: []
	I0923 23:38:39.177399   15595 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.crt ...
	I0923 23:38:39.177428   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.crt: {Name:mk4c2bcd8e29ba40ef0d702e456f6cc12643c76f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:39.177628   15595 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.key ...
	I0923 23:38:39.177644   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.key: {Name:mk0b805c6a5a96a200e2ad8b91700b46d18926a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:39.177862   15595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 23:38:39.177906   15595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/ca.pem (1078 bytes)
	I0923 23:38:39.177941   15595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/cert.pem (1123 bytes)
	I0923 23:38:39.177970   15595 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-7438/.minikube/certs/key.pem (1679 bytes)
	I0923 23:38:39.178543   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 23:38:39.200192   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0923 23:38:39.221630   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 23:38:39.242137   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0923 23:38:39.262770   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 23:38:39.282782   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0923 23:38:39.303856   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 23:38:39.325077   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 23:38:39.345894   15595 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-7438/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 23:38:39.366438   15595 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 23:38:39.381402   15595 ssh_runner.go:195] Run: openssl version
	I0923 23:38:39.386293   15595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 23:38:39.394476   15595 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:39.397718   15595 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 23:38 /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:39.397763   15595 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 23:38:39.403797   15595 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
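	The two steps above (openssl x509 -hash plus the b5213941.0 symlink) add minikubeCA to the system OpenSSL trust store, where CAs are found via <subject-hash>.0 links. A hedged sketch that shells out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links /etc/ssl/certs/<subject-hash>.0 to the PEM so OpenSSL-based
// clients resolve the CA; b5213941 in the log is exactly such a hash.
func trustCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // emulate ln -fs: replace any stale link
	return os.Symlink(pem, link)
}

func main() { fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem")) }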
	I0923 23:38:39.411602   15595 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 23:38:39.414427   15595 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 23:38:39.414474   15595 kubeadm.go:392] StartCluster: {Name:addons-537454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-537454 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:38:39.414590   15595 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0923 23:38:39.430453   15595 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 23:38:39.438469   15595 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 23:38:39.445842   15595 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 23:38:39.445884   15595 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 23:38:39.453168   15595 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 23:38:39.453184   15595 kubeadm.go:157] found existing configuration files:
	
	I0923 23:38:39.453231   15595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 23:38:39.460729   15595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 23:38:39.460784   15595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 23:38:39.467988   15595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 23:38:39.475134   15595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 23:38:39.475175   15595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 23:38:39.482288   15595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 23:38:39.489550   15595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 23:38:39.489596   15595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 23:38:39.496704   15595 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 23:38:39.504259   15595 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 23:38:39.504310   15595 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 23:38:39.511455   15595 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 23:38:39.543128   15595 kubeadm.go:310] W0923 23:38:39.542530    1931 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:38:39.543745   15595 kubeadm.go:310] W0923 23:38:39.543119    1931 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 23:38:39.563767   15595 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0923 23:38:39.611664   15595 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 23:38:49.193112   15595 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 23:38:49.193164   15595 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 23:38:49.193230   15595 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 23:38:49.193322   15595 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0923 23:38:49.193373   15595 kubeadm.go:310] OS: Linux
	I0923 23:38:49.193426   15595 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 23:38:49.193492   15595 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 23:38:49.193557   15595 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 23:38:49.193625   15595 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 23:38:49.193706   15595 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 23:38:49.193768   15595 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 23:38:49.193834   15595 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 23:38:49.193895   15595 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 23:38:49.193977   15595 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 23:38:49.194083   15595 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 23:38:49.194201   15595 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 23:38:49.194338   15595 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 23:38:49.194522   15595 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 23:38:49.196296   15595 out.go:235]   - Generating certificates and keys ...
	I0923 23:38:49.196417   15595 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 23:38:49.196494   15595 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 23:38:49.196592   15595 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 23:38:49.196651   15595 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 23:38:49.196713   15595 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 23:38:49.196761   15595 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 23:38:49.196838   15595 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 23:38:49.196952   15595 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-537454 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 23:38:49.197000   15595 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 23:38:49.197102   15595 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-537454 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 23:38:49.197212   15595 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 23:38:49.197308   15595 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 23:38:49.197393   15595 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 23:38:49.197474   15595 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 23:38:49.197556   15595 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 23:38:49.197645   15595 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 23:38:49.197741   15595 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 23:38:49.197844   15595 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 23:38:49.197907   15595 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 23:38:49.197975   15595 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 23:38:49.198093   15595 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 23:38:49.199710   15595 out.go:235]   - Booting up control plane ...
	I0923 23:38:49.199806   15595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 23:38:49.199889   15595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 23:38:49.199971   15595 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 23:38:49.200097   15595 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 23:38:49.200210   15595 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 23:38:49.200277   15595 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 23:38:49.200431   15595 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 23:38:49.200578   15595 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 23:38:49.200668   15595 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.565517ms
	I0923 23:38:49.200765   15595 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 23:38:49.200850   15595 kubeadm.go:310] [api-check] The API server is healthy after 5.001445217s
	I0923 23:38:49.200957   15595 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 23:38:49.201113   15595 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 23:38:49.201177   15595 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 23:38:49.201348   15595 kubeadm.go:310] [mark-control-plane] Marking the node addons-537454 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 23:38:49.201441   15595 kubeadm.go:310] [bootstrap-token] Using token: elzlxh.9m0xj6jr8t1oj7h8
	I0923 23:38:49.203018   15595 out.go:235]   - Configuring RBAC rules ...
	I0923 23:38:49.203131   15595 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 23:38:49.203223   15595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 23:38:49.203369   15595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 23:38:49.203514   15595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 23:38:49.203709   15595 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 23:38:49.203820   15595 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 23:38:49.203969   15595 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 23:38:49.204024   15595 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 23:38:49.204065   15595 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 23:38:49.204071   15595 kubeadm.go:310] 
	I0923 23:38:49.204128   15595 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 23:38:49.204136   15595 kubeadm.go:310] 
	I0923 23:38:49.204258   15595 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 23:38:49.204271   15595 kubeadm.go:310] 
	I0923 23:38:49.204305   15595 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 23:38:49.204395   15595 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 23:38:49.204468   15595 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 23:38:49.204479   15595 kubeadm.go:310] 
	I0923 23:38:49.204549   15595 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 23:38:49.204562   15595 kubeadm.go:310] 
	I0923 23:38:49.204627   15595 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 23:38:49.204635   15595 kubeadm.go:310] 
	I0923 23:38:49.204696   15595 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 23:38:49.204810   15595 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 23:38:49.204928   15595 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 23:38:49.204939   15595 kubeadm.go:310] 
	I0923 23:38:49.205036   15595 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 23:38:49.205143   15595 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 23:38:49.205158   15595 kubeadm.go:310] 
	I0923 23:38:49.205253   15595 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token elzlxh.9m0xj6jr8t1oj7h8 \
	I0923 23:38:49.205382   15595 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e055d3881aa058b5f446c68aa2091f6990cbec9617aed1214a4b54f45ad1753 \
	I0923 23:38:49.205409   15595 kubeadm.go:310] 	--control-plane 
	I0923 23:38:49.205413   15595 kubeadm.go:310] 
	I0923 23:38:49.205509   15595 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 23:38:49.205521   15595 kubeadm.go:310] 
	I0923 23:38:49.205621   15595 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token elzlxh.9m0xj6jr8t1oj7h8 \
	I0923 23:38:49.205749   15595 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6e055d3881aa058b5f446c68aa2091f6990cbec9617aed1214a4b54f45ad1753 
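[editor's note] The bootstrap token in the join commands above (elzlxh.9m0xj6jr8t1oj7h8) is short-lived; kubeadm tokens expire after 24h by default. If it has expired by the time a node needs to join, a fresh join line can be printed on the control plane (standard kubeadm usage, not minikube-specific):

	# Creates a new bootstrap token and prints the full 'kubeadm join ...' command,
	# including the current --discovery-token-ca-cert-hash.
	kubeadm token create --print-join-command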
	I0923 23:38:49.205765   15595 cni.go:84] Creating CNI manager for ""
	I0923 23:38:49.205780   15595 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:38:49.207316   15595 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0923 23:38:49.208423   15595 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0923 23:38:49.216391   15595 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
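[editor's note] The 496-byte conflist scp'd above is minikube's bridge CNI config; its literal contents are not shown in this log. For illustration only, a typical bridge conflist written to that path looks roughly like the following (field values are assumptions, not the actual minikube file):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    }
	  ]
	}
	EOF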
	I0923 23:38:49.232513   15595 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 23:38:49.232622   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:49.232682   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-537454 minikube.k8s.io/updated_at=2024_09_23T23_38_49_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-537454 minikube.k8s.io/primary=true
	I0923 23:38:49.239281   15595 ops.go:34] apiserver oom_adj: -16
	I0923 23:38:49.299068   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:49.800132   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:50.299914   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:50.799865   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:51.299751   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:51.799883   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:52.299530   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:52.799946   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:53.299225   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:53.799600   15595 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 23:38:53.868910   15595 kubeadm.go:1113] duration metric: took 4.636320792s to wait for elevateKubeSystemPrivileges
	I0923 23:38:53.868944   15595 kubeadm.go:394] duration metric: took 14.45447138s to StartCluster
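[editor's note] The burst of 'kubectl get sa default' runs above is the elevateKubeSystemPrivileges wait named in the duration metric: minikube polls roughly every 500ms until the default ServiceAccount exists before proceeding. The equivalent shell loop, as a sketch with an assumed interval:

	# Poll until the API server has created the default ServiceAccount.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done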
	I0923 23:38:53.868964   15595 settings.go:142] acquiring lock: {Name:mk63f6c8af11a909d6ec80320206b199474d1aa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.869092   15595 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:38:53.869428   15595 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-7438/kubeconfig: {Name:mk3baf46f9789484e30e2455cc3982563b43a0a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 23:38:53.869620   15595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 23:38:53.869639   15595 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 23:38:53.869618   15595 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0923 23:38:53.869845   15595 config.go:182] Loaded profile config "addons-537454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:38:53.869833   15595 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-537454"
	I0923 23:38:53.869909   15595 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-537454"
	I0923 23:38:53.869942   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.869743   15595 addons.go:69] Setting ingress-dns=true in profile "addons-537454"
	I0923 23:38:53.870027   15595 addons.go:234] Setting addon ingress-dns=true in "addons-537454"
	I0923 23:38:53.870070   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.869743   15595 addons.go:69] Setting ingress=true in profile "addons-537454"
	I0923 23:38:53.870143   15595 addons.go:234] Setting addon ingress=true in "addons-537454"
	I0923 23:38:53.870191   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.869751   15595 addons.go:69] Setting inspektor-gadget=true in profile "addons-537454"
	I0923 23:38:53.870271   15595 addons.go:234] Setting addon inspektor-gadget=true in "addons-537454"
	I0923 23:38:53.870311   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.869754   15595 addons.go:69] Setting default-storageclass=true in profile "addons-537454"
	I0923 23:38:53.870354   15595 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-537454"
	I0923 23:38:53.869758   15595 addons.go:69] Setting metrics-server=true in profile "addons-537454"
	I0923 23:38:53.870493   15595 addons.go:234] Setting addon metrics-server=true in "addons-537454"
	I0923 23:38:53.870552   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.870580   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.870643   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.870648   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.870552   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.870810   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.871178   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.869770   15595 addons.go:69] Setting gcp-auth=true in profile "addons-537454"
	I0923 23:38:53.871356   15595 mustload.go:65] Loading cluster: addons-537454
	I0923 23:38:53.869777   15595 addons.go:69] Setting volcano=true in profile "addons-537454"
	I0923 23:38:53.871537   15595 addons.go:234] Setting addon volcano=true in "addons-537454"
	I0923 23:38:53.871547   15595 config.go:182] Loaded profile config "addons-537454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:38:53.871570   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.869780   15595 addons.go:69] Setting cloud-spanner=true in profile "addons-537454"
	I0923 23:38:53.871736   15595 addons.go:234] Setting addon cloud-spanner=true in "addons-537454"
	I0923 23:38:53.871761   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.871789   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.872036   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.872292   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.872576   15595 out.go:177] * Verifying Kubernetes components...
	I0923 23:38:53.874841   15595 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 23:38:53.869787   15595 addons.go:69] Setting volumesnapshots=true in profile "addons-537454"
	I0923 23:38:53.875257   15595 addons.go:234] Setting addon volumesnapshots=true in "addons-537454"
	I0923 23:38:53.875345   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.876083   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.869791   15595 addons.go:69] Setting registry=true in profile "addons-537454"
	I0923 23:38:53.879199   15595 addons.go:234] Setting addon registry=true in "addons-537454"
	I0923 23:38:53.879270   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.879690   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.869785   15595 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-537454"
	I0923 23:38:53.882087   15595 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-537454"
	I0923 23:38:53.869805   15595 addons.go:69] Setting storage-provisioner=true in profile "addons-537454"
	I0923 23:38:53.869733   15595 addons.go:69] Setting yakd=true in profile "addons-537454"
	I0923 23:38:53.869769   15595 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-537454"
	I0923 23:38:53.883244   15595 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-537454"
	I0923 23:38:53.883456   15595 addons.go:234] Setting addon storage-provisioner=true in "addons-537454"
	I0923 23:38:53.883520   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.883658   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.883998   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.884278   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.884293   15595 addons.go:234] Setting addon yakd=true in "addons-537454"
	I0923 23:38:53.885097   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.911695   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.913704   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.922714   15595 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0923 23:38:53.924176   15595 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0923 23:38:53.924261   15595 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 23:38:53.926669   15595 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 23:38:53.926723   15595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:38:53.926908   15595 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:38:53.926927   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 23:38:53.927000   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
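[editor's note] The docker container inspect template above (and repeated below) pulls the host port Docker mapped to the container's 22/tcp, which is how every 'new ssh client' line later in the log resolves to 127.0.0.1:32768. Run by hand:

	# Prints the host port bound to the container's SSH port (32768 in this run).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-537454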
	I0923 23:38:53.926751   15595 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0923 23:38:53.926700   15595 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 23:38:53.934585   15595 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 23:38:53.934608   15595 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 23:38:53.934677   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.936037   15595 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 23:38:53.936064   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0923 23:38:53.936069   15595 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:38:53.936084   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 23:38:53.936128   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.936218   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.936629   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.937425   15595 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 23:38:53.937776   15595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:38:53.938942   15595 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 23:38:53.938983   15595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 23:38:53.939094   15595 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 23:38:53.939114   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 23:38:53.939176   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.939803   15595 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-537454"
	I0923 23:38:53.939847   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.939922   15595 addons.go:234] Setting addon default-storageclass=true in "addons-537454"
	I0923 23:38:53.939958   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:38:53.940211   15595 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 23:38:53.940227   15595 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 23:38:53.940277   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.940288   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.941409   15595 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:38:53.941429   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 23:38:53.941474   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.941946   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:38:53.968839   15595 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 23:38:53.970090   15595 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 23:38:53.971563   15595 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 23:38:53.971582   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 23:38:53.971636   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.982388   15595 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 23:38:53.982411   15595 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 23:38:53.982452   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:53.989150   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 23:38:53.991393   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 23:38:53.995597   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 23:38:53.996870   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 23:38:53.998229   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 23:38:53.999811   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 23:38:54.001387   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 23:38:54.001501   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 23:38:54.002767   15595 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 23:38:54.002907   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 23:38:54.002974   15595 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 23:38:54.003035   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:54.004241   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.005667   15595 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 23:38:54.005970   15595 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 23:38:54.006083   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:54.005877   15595 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 23:38:54.006428   15595 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0923 23:38:54.007281   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 23:38:54.007873   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 23:38:54.007933   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:54.008868   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.008981   15595 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 23:38:54.009061   15595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:38:54.009072   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 23:38:54.009118   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:54.011354   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.012592   15595 out.go:177]   - Using image docker.io/busybox:stable
	I0923 23:38:54.013854   15595 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:38:54.013871   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 23:38:54.013918   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:38:54.018712   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.020104   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.020553   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.022052   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.026034   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.026378   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.032764   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.043499   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.047639   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.049667   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	W0923 23:38:54.052098   15595 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 23:38:54.052126   15595 retry.go:31] will retry after 210.294205ms: ssh: handshake failed: EOF
	I0923 23:38:54.052422   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:38:54.140181   15595 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
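[editor's note] The pipeline above edits the coredns ConfigMap in place: the sed expressions insert a hosts block ahead of the 'forward . /etc/resolv.conf' line, mapping host.minikube.internal to the gateway 192.168.49.1, and add a 'log' directive ahead of 'errors'. Reconstructed from those expressions, the patched Corefile fragment reads:

	        log
	        errors
	        # ...other default plugins unchanged...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf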
	I0923 23:38:54.149188   15595 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 23:38:54.443811   15595 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 23:38:54.443890   15595 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 23:38:54.453927   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 23:38:54.536027   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 23:38:54.540940   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 23:38:54.542368   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 23:38:54.543541   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 23:38:54.543588   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 23:38:54.549698   15595 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 23:38:54.549724   15595 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 23:38:54.552184   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0923 23:38:54.635717   15595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 23:38:54.635745   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 23:38:54.652964   15595 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 23:38:54.653052   15595 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 23:38:54.734797   15595 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 23:38:54.734906   15595 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 23:38:54.735744   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 23:38:54.735768   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 23:38:54.748018   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 23:38:54.837037   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 23:38:54.849466   15595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 23:38:54.849547   15595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 23:38:54.853613   15595 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:38:54.853683   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 23:38:55.036300   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 23:38:55.036339   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 23:38:55.036903   15595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 23:38:55.036969   15595 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 23:38:55.141650   15595 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 23:38:55.141736   15595 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 23:38:55.144172   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 23:38:55.145769   15595 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 23:38:55.145827   15595 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 23:38:55.334722   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 23:38:55.435089   15595 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:38:55.435173   15595 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 23:38:55.440926   15595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 23:38:55.441005   15595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 23:38:55.454803   15595 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 23:38:55.454885   15595 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 23:38:55.538836   15595 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 23:38:55.538864   15595 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 23:38:55.548021   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 23:38:55.548099   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 23:38:55.740046   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 23:38:55.752159   15595 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:38:55.752254   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 23:38:56.045103   15595 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 23:38:56.045179   15595 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 23:38:56.150480   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 23:38:56.150568   15595 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 23:38:56.152041   15595 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 23:38:56.152108   15595 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 23:38:56.240313   15595 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.091088475s)
	I0923 23:38:56.240505   15595 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.100284347s)
	I0923 23:38:56.240645   15595 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 23:38:56.242854   15595 node_ready.go:35] waiting up to 6m0s for node "addons-537454" to be "Ready" ...
	I0923 23:38:56.252219   15595 node_ready.go:49] node "addons-537454" has status "Ready":"True"
	I0923 23:38:56.252251   15595 node_ready.go:38] duration metric: took 9.321906ms for node "addons-537454" to be "Ready" ...
	I0923 23:38:56.252262   15595 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:38:56.261367   15595 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace to be "Ready" ...
	I0923 23:38:56.338181   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 23:38:56.436012   15595 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 23:38:56.436117   15595 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 23:38:56.537080   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 23:38:56.537169   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 23:38:56.736419   15595 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 23:38:56.736449   15595 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 23:38:56.746050   15595 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-537454" context rescaled to 1 replicas
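[editor's note] kubeadm deploys CoreDNS with more than one replica by default (typically two); the kapi.go line above shows minikube trimming that to a single replica for this one-node cluster. The hand-run equivalent (standard kubectl, not the minikube code path):

	kubectl --context addons-537454 -n kube-system scale deployment coredns --replicas=1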
	I0923 23:38:56.839458   15595 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:38:56.839491   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 23:38:57.046026   15595 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:38:57.046110   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 23:38:57.158347   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:38:57.245161   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 23:38:57.245243   15595 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 23:38:57.556492   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 23:38:57.744408   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 23:38:57.744443   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 23:38:58.351108   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status "Ready":"False"
	I0923 23:38:58.352047   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 23:38:58.352082   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 23:38:58.643054   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.189016367s)
	I0923 23:38:58.747687   15595 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:38:58.747741   15595 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 23:38:59.039803   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 23:39:00.843778   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:00.951026   15595 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 23:39:00.951154   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:39:00.970323   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:39:01.556581   15595 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 23:39:01.836934   15595 addons.go:234] Setting addon gcp-auth=true in "addons-537454"
	I0923 23:39:01.836996   15595 host.go:66] Checking if "addons-537454" exists ...
	I0923 23:39:01.837585   15595 cli_runner.go:164] Run: docker container inspect addons-537454 --format={{.State.Status}}
	I0923 23:39:01.864289   15595 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 23:39:01.864349   15595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-537454
	I0923 23:39:01.880958   15595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/addons-537454/id_rsa Username:docker}
	I0923 23:39:02.936540   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:02.952895   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.416828777s)
	I0923 23:39:02.952937   15595 addons.go:475] Verifying addon ingress=true in "addons-537454"
	I0923 23:39:02.953320   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.412344736s)
	I0923 23:39:02.954494   15595 out.go:177] * Verifying ingress addon...
	I0923 23:39:02.957126   15595 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 23:39:03.043845   15595 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 23:39:03.043877   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:03.538242   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:03.963320   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:04.544239   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:05.037732   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:05.437427   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:05.542112   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:05.635382   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.092936724s)
	I0923 23:39:05.635514   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.887463293s)
	I0923 23:39:05.635558   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.79848295s)
	I0923 23:39:05.635620   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.491362822s)
	I0923 23:39:05.635664   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.300906282s)
	I0923 23:39:05.636792   15595 addons.go:475] Verifying addon registry=true in "addons-537454"
	I0923 23:39:05.635750   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.89567593s)
	I0923 23:39:05.637037   15595 addons.go:475] Verifying addon metrics-server=true in "addons-537454"
	I0923 23:39:05.635810   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.297545822s)
	I0923 23:39:05.635949   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.477568237s)
	W0923 23:39:05.637296   15595 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:05.637353   15595 retry.go:31] will retry after 180.704076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 23:39:05.636041   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.079510871s)
	I0923 23:39:05.636246   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.084001261s)
	I0923 23:39:05.639291   15595 out.go:177] * Verifying registry addon...
	I0923 23:39:05.639491   15595 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-537454 service yakd-dashboard -n yakd-dashboard
	
	I0923 23:39:05.643557   15595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 23:39:05.648763   15595 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 23:39:05.648856   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0923 23:39:05.650682   15595 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
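
The default-storageclass warning just above is an optimistic-concurrency conflict: the StorageClass changed between the read and the write, so the apiserver rejected the stale update. The conventional remedy is client-go's RetryOnConflict, which re-reads the object and reapplies the mutation; a hedged sketch (markNonDefault is an illustrative helper, not the addon's actual code):

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// retrying on "the object has been modified" conflicts like the one logged.
	func markNonDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err
		})
	}
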
	I0923 23:39:05.818382   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 23:39:06.038769   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:06.148052   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:06.462374   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:06.647781   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:06.951966   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.912046518s)
	I0923 23:39:06.952007   15595 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-537454"
	I0923 23:39:06.952394   15595 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.087918297s)
	I0923 23:39:06.953598   15595 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 23:39:06.953727   15595 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 23:39:06.960812   15595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 23:39:06.962053   15595 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 23:39:06.963403   15595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 23:39:06.963445   15595 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 23:39:06.965597   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:06.965869   15595 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 23:39:06.965887   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:07.057314   15595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 23:39:07.057338   15595 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 23:39:07.077432   15595 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:07.077453   15595 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 23:39:07.148026   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:07.155871   15595 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 23:39:07.461542   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:07.465045   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:07.648669   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:07.841345   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:07.961851   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:07.965741   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:08.148867   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:08.539010   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.720578156s)
	I0923 23:39:08.539435   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:08.540565   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:08.647996   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:08.744476   15595 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.588556346s)
	I0923 23:39:08.746763   15595 addons.go:475] Verifying addon gcp-auth=true in "addons-537454"
	I0923 23:39:08.748467   15595 out.go:177] * Verifying gcp-auth addon...
	I0923 23:39:08.750744   15595 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 23:39:08.753231   15595 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:39:08.767250   15595 pod_ready.go:98] pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 23:38:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:38:56 +0000 UTC,FinishedAt:2024-09-23 23:39:07 +0000 UTC,ContainerID:docker://25e81925959c9ce1698eac61e46384caa4d84d2b25cbd30245868b671c9a5431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://25e81925959c9ce1698eac61e46384caa4d84d2b25cbd30245868b671c9a5431 Started:0xc001ac0e50 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00087c380} {Name:kube-api-access-ktb7j MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00087c390}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:08.767282   15595 pod_ready.go:82] duration metric: took 12.505884015s for pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace to be "Ready" ...
	E0923 23:39:08.767298   15595 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-bgmvd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:39:08 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-23 23:38:53 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.3 PodIPs:[{IP:10.244.0.3}] StartTime:2024-09-23 23:38:53 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-23 23:38:56 +0000 UTC,FinishedAt:2024-09-23 23:39:07 +0000 UTC,ContainerID:docker://25e81925959c9ce1698eac61e46384caa4d84d2b25cbd30245868b671c9a5431,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://25e81925959c9ce1698eac61e46384caa4d84d2b25cbd30245868b671c9a5431 Started:0xc001ac0e50 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc00087c380} {Name:kube-api-access-ktb7j MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc00087c390}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0923 23:39:08.767310   15595 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace to be "Ready" ...
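
Note how the waiter handled coredns-7c65d6cfc9-bgmvd above: a pod whose phase is Succeeded is terminal and can never report Ready, so pod_ready.go records the error, skips it, and moves on to the replacement pod. The test reduces to roughly the following sketch against the corev1 types (not minikube's exact function):

	package main

	import corev1 "k8s.io/api/core/v1"

	// podReady reports whether a pod is Ready, and whether it is in a
	// terminal phase (Succeeded/Failed) that can never become Ready.
	func podReady(p *corev1.Pod) (ready, terminal bool) {
		if p.Status.Phase == corev1.PodSucceeded || p.Status.Phase == corev1.PodFailed {
			return false, true // skip it, as the log does above
		}
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, false
			}
		}
		return false, false
	}
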
	I0923 23:39:08.961373   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:08.964819   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:09.147804   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:09.461791   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:09.465463   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:09.647899   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:09.962180   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:09.964583   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:10.148056   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:10.461461   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:10.465160   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:10.647334   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:10.773439   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:10.961021   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:10.964218   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:11.148386   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:11.462385   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:11.464611   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:11.647091   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:11.961282   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:11.964932   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:12.162088   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:12.478212   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:12.478650   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:12.646691   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:12.786108   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:12.961300   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:12.964120   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:13.147400   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:13.461724   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:13.464180   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:13.647376   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:13.961609   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:13.965264   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:14.146829   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:14.461165   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:14.464304   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:14.646524   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:14.961655   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:14.965074   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:15.147453   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.273728   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:15.460651   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.465445   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:15.646839   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:15.961321   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:15.964515   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:16.146357   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.462175   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:16.464305   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:16.647206   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:16.961609   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:16.965366   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.147375   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.462114   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:17.464551   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:17.648167   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:17.773437   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:17.961079   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.116181   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.147376   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.461722   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.464935   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:18.647628   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:18.961076   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:18.964054   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.147261   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.461206   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.464341   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:19.647756   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:19.961168   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:19.964430   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.147542   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.272341   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:20.461464   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.465245   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:20.647844   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:20.960802   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:20.964218   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.148182   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.461787   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.465720   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:21.647588   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:21.961268   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:21.964419   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.146614   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.272802   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:22.462204   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.464861   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:22.647059   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:22.961683   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:22.965194   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.147268   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.460339   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.465180   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:23.647600   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:23.961615   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:23.965435   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.147829   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.461970   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.464301   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:24.648028   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:24.773096   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:24.961926   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:24.964962   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.146983   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.461152   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.464346   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:25.647473   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:25.961420   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:25.964485   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.146653   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.461150   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.464511   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:26.646932   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:26.773169   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:26.961789   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:26.965436   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.147039   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.461487   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.464328   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:27.646965   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:27.960595   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:27.963876   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.148901   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.461008   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.464208   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:28.648173   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:28.773196   15595 pod_ready.go:103] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"False"
	I0923 23:39:28.968799   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:28.969064   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.147708   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.460448   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.464843   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:29.646910   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:29.960617   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:29.964016   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.147227   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.462566   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.467267   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:30.647369   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:30.772406   15595 pod_ready.go:93] pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:30.772427   15595 pod_ready.go:82] duration metric: took 22.005104043s for pod "coredns-7c65d6cfc9-l5wfh" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.772435   15595 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.776387   15595 pod_ready.go:93] pod "etcd-addons-537454" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:30.776421   15595 pod_ready.go:82] duration metric: took 3.980891ms for pod "etcd-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.776430   15595 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.780315   15595 pod_ready.go:93] pod "kube-apiserver-addons-537454" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:30.780337   15595 pod_ready.go:82] duration metric: took 3.900725ms for pod "kube-apiserver-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.780346   15595 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.783993   15595 pod_ready.go:93] pod "kube-controller-manager-addons-537454" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:30.784011   15595 pod_ready.go:82] duration metric: took 3.659539ms for pod "kube-controller-manager-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.784030   15595 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w5fqz" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.787543   15595 pod_ready.go:93] pod "kube-proxy-w5fqz" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:30.787563   15595 pod_ready.go:82] duration metric: took 3.526895ms for pod "kube-proxy-w5fqz" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.787574   15595 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:30.961060   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:30.964084   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.147597   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.171004   15595 pod_ready.go:93] pod "kube-scheduler-addons-537454" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:31.171027   15595 pod_ready.go:82] duration metric: took 383.446047ms for pod "kube-scheduler-addons-537454" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:31.171037   15595 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-27tsr" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:31.460805   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.464341   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.571135   15595 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-27tsr" in "kube-system" namespace has status "Ready":"True"
	I0923 23:39:31.571161   15595 pod_ready.go:82] duration metric: took 400.116122ms for pod "nvidia-device-plugin-daemonset-27tsr" in "kube-system" namespace to be "Ready" ...
	I0923 23:39:31.571172   15595 pod_ready.go:39] duration metric: took 35.318896415s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 23:39:31.571194   15595 api_server.go:52] waiting for apiserver process to appear ...
	I0923 23:39:31.571254   15595 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:39:31.584965   15595 api_server.go:72] duration metric: took 37.715245669s to wait for apiserver process to appear ...
	I0923 23:39:31.584987   15595 api_server.go:88] waiting for apiserver healthz status ...
	I0923 23:39:31.585005   15595 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 23:39:31.589058   15595 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 23:39:31.589828   15595 api_server.go:141] control plane version: v1.31.1
	I0923 23:39:31.589847   15595 api_server.go:131] duration metric: took 4.855068ms to wait for apiserver health ...
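
The healthz probe above is a plain HTTPS GET that expects a 200 with body "ok". A standalone sketch of the same check (certificate verification is skipped here only because the apiserver's self-signed cert is not in the host trust store; minikube's real client is kubeconfig-aware):

	package main

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy probes e.g. https://192.168.49.2:8443/healthz and
	// mirrors the log's success criterion: HTTP 200 with body "ok".
	func apiserverHealthy(url string) bool {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok"
	}
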
	I0923 23:39:31.589856   15595 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 23:39:31.647021   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:31.775156   15595 system_pods.go:59] 17 kube-system pods found
	I0923 23:39:31.775188   15595 system_pods.go:61] "coredns-7c65d6cfc9-l5wfh" [07d4e905-d56d-4489-9f8d-2146badb4bc7] Running
	I0923 23:39:31.775197   15595 system_pods.go:61] "csi-hostpath-attacher-0" [8458a209-5151-43e9-b3d7-c21882bc4294] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:31.775204   15595 system_pods.go:61] "csi-hostpath-resizer-0" [e871c282-b270-4e14-8605-9c0b26891139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:31.775212   15595 system_pods.go:61] "csi-hostpathplugin-8s578" [d0b6a826-6d85-40ea-99d5-cf65e30d5613] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:31.775216   15595 system_pods.go:61] "etcd-addons-537454" [ad649787-c104-4dda-832d-3ea3abd19296] Running
	I0923 23:39:31.775219   15595 system_pods.go:61] "kube-apiserver-addons-537454" [af8b28eb-6102-4021-8074-06a57acbddc8] Running
	I0923 23:39:31.775223   15595 system_pods.go:61] "kube-controller-manager-addons-537454" [f406b9fa-72ab-44f3-a64b-fcd195a6deec] Running
	I0923 23:39:31.775227   15595 system_pods.go:61] "kube-ingress-dns-minikube" [d283c460-5a7d-4974-a26d-22a6a9f13460] Running
	I0923 23:39:31.775230   15595 system_pods.go:61] "kube-proxy-w5fqz" [10a56f73-ee42-4a77-8ea3-edd557532713] Running
	I0923 23:39:31.775233   15595 system_pods.go:61] "kube-scheduler-addons-537454" [fcbb854d-5aac-41b3-8dfb-50cd37111e5b] Running
	I0923 23:39:31.775240   15595 system_pods.go:61] "metrics-server-84c5f94fbc-vhxvb" [2b86c7cb-1ba3-4c6e-a24b-d7efc7056acf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:31.775244   15595 system_pods.go:61] "nvidia-device-plugin-daemonset-27tsr" [b325bd7a-05f1-473f-bbfb-9f57ff7e8bfd] Running
	I0923 23:39:31.775248   15595 system_pods.go:61] "registry-66c9cd494c-p8v2z" [09f0475c-4746-427a-ab8c-9c11b2ee2bfa] Running
	I0923 23:39:31.775253   15595 system_pods.go:61] "registry-proxy-sgxhg" [d8940b72-00d5-4d8d-94d1-657f7a3dfea2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:31.775259   15595 system_pods.go:61] "snapshot-controller-56fcc65765-cc8dd" [5f99f77f-2e6a-41b5-ba6f-03af5b5dc541] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:31.775267   15595 system_pods.go:61] "snapshot-controller-56fcc65765-wjv4x" [0df8ff41-a477-43f2-b65f-71742df438c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:31.775271   15595 system_pods.go:61] "storage-provisioner" [78920239-356a-4105-a15b-92c22dc608a3] Running
	I0923 23:39:31.775278   15595 system_pods.go:74] duration metric: took 185.416089ms to wait for pod list to return data ...
	I0923 23:39:31.775288   15595 default_sa.go:34] waiting for default service account to be created ...
	I0923 23:39:31.961366   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:31.964795   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:31.969883   15595 default_sa.go:45] found service account: "default"
	I0923 23:39:31.969908   15595 default_sa.go:55] duration metric: took 194.61243ms for default service account to be created ...
	I0923 23:39:31.969918   15595 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 23:39:32.147181   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.176526   15595 system_pods.go:86] 17 kube-system pods found
	I0923 23:39:32.176559   15595 system_pods.go:89] "coredns-7c65d6cfc9-l5wfh" [07d4e905-d56d-4489-9f8d-2146badb4bc7] Running
	I0923 23:39:32.176571   15595 system_pods.go:89] "csi-hostpath-attacher-0" [8458a209-5151-43e9-b3d7-c21882bc4294] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0923 23:39:32.176591   15595 system_pods.go:89] "csi-hostpath-resizer-0" [e871c282-b270-4e14-8605-9c0b26891139] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0923 23:39:32.176601   15595 system_pods.go:89] "csi-hostpathplugin-8s578" [d0b6a826-6d85-40ea-99d5-cf65e30d5613] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0923 23:39:32.176611   15595 system_pods.go:89] "etcd-addons-537454" [ad649787-c104-4dda-832d-3ea3abd19296] Running
	I0923 23:39:32.176618   15595 system_pods.go:89] "kube-apiserver-addons-537454" [af8b28eb-6102-4021-8074-06a57acbddc8] Running
	I0923 23:39:32.176628   15595 system_pods.go:89] "kube-controller-manager-addons-537454" [f406b9fa-72ab-44f3-a64b-fcd195a6deec] Running
	I0923 23:39:32.176634   15595 system_pods.go:89] "kube-ingress-dns-minikube" [d283c460-5a7d-4974-a26d-22a6a9f13460] Running
	I0923 23:39:32.176642   15595 system_pods.go:89] "kube-proxy-w5fqz" [10a56f73-ee42-4a77-8ea3-edd557532713] Running
	I0923 23:39:32.176648   15595 system_pods.go:89] "kube-scheduler-addons-537454" [fcbb854d-5aac-41b3-8dfb-50cd37111e5b] Running
	I0923 23:39:32.176659   15595 system_pods.go:89] "metrics-server-84c5f94fbc-vhxvb" [2b86c7cb-1ba3-4c6e-a24b-d7efc7056acf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0923 23:39:32.176666   15595 system_pods.go:89] "nvidia-device-plugin-daemonset-27tsr" [b325bd7a-05f1-473f-bbfb-9f57ff7e8bfd] Running
	I0923 23:39:32.176673   15595 system_pods.go:89] "registry-66c9cd494c-p8v2z" [09f0475c-4746-427a-ab8c-9c11b2ee2bfa] Running
	I0923 23:39:32.176682   15595 system_pods.go:89] "registry-proxy-sgxhg" [d8940b72-00d5-4d8d-94d1-657f7a3dfea2] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0923 23:39:32.176694   15595 system_pods.go:89] "snapshot-controller-56fcc65765-cc8dd" [5f99f77f-2e6a-41b5-ba6f-03af5b5dc541] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:32.176704   15595 system_pods.go:89] "snapshot-controller-56fcc65765-wjv4x" [0df8ff41-a477-43f2-b65f-71742df438c6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0923 23:39:32.176717   15595 system_pods.go:89] "storage-provisioner" [78920239-356a-4105-a15b-92c22dc608a3] Running
	I0923 23:39:32.176726   15595 system_pods.go:126] duration metric: took 206.801523ms to wait for k8s-apps to be running ...
	I0923 23:39:32.176735   15595 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 23:39:32.176788   15595 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:39:32.191870   15595 system_svc.go:56] duration metric: took 15.127291ms WaitForService to wait for kubelet
	I0923 23:39:32.191903   15595 kubeadm.go:582] duration metric: took 38.32218416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 23:39:32.191926   15595 node_conditions.go:102] verifying NodePressure condition ...
	I0923 23:39:32.370927   15595 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 23:39:32.370963   15595 node_conditions.go:123] node cpu capacity is 8
	I0923 23:39:32.370980   15595 node_conditions.go:105] duration metric: took 179.047406ms to run NodePressure ...
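
The NodePressure verification reads the node's reported capacity (the 304681132Ki of ephemeral storage and 8 CPUs above) and fails if a pressure condition is True. A client-go sketch of that read (checkNodePressure is an illustrative name, and this is an assumption about the shape of the check, not minikube's code):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func checkNodePressure(cs kubernetes.Interface, name string) error {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("ephemeral storage %s, cpu %s\n", eph.String(), cpu.String())
		for _, c := range node.Status.Conditions {
			if (c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodeMemoryPressure) &&
				c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s under pressure: %s", name, c.Type)
			}
		}
		return nil
	}
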
	I0923 23:39:32.370995   15595 start.go:241] waiting for startup goroutines ...
	I0923 23:39:32.461895   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.465367   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:32.647680   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:32.961453   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:32.964973   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.147275   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.461115   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.464606   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:33.648139   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:33.961203   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:33.964756   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.147110   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.460839   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.464396   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:34.647596   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:34.984306   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:34.984699   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.147193   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.461263   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.464802   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:35.647265   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 23:39:35.961567   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:35.964641   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.147369   15595 kapi.go:107] duration metric: took 30.503811344s to wait for kubernetes.io/minikube-addons=registry ...
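
That line closes the registry wait (30.5s) and illustrates the kapi.go pattern repeated throughout this log: poll the pods matching a label selector until every one reports Ready. A condensed sketch of such a loop (waitForLabel is illustrative, not kapi.go's implementation):

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector (e.g.
	// "kubernetes.io/minikube-addons=registry") in ns until all are Ready.
	func waitForLabel(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // not there yet; matches the Pending lines above
				}
				for _, p := range pods.Items {
					ready := false
					for _, c := range p.Status.Conditions {
						if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
							ready = true
						}
					}
					if !ready {
						return false, nil
					}
				}
				return true, nil
			})
	}
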
	I0923 23:39:36.462002   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.465179   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:36.960981   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:36.964473   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.461016   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.464005   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:37.961491   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:37.964991   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.462296   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.464377   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:38.961200   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:38.964408   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.461049   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.463968   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:39.961648   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:39.963800   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.460867   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.466201   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:40.961592   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:40.965232   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.464029   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.466391   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:41.960833   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:41.964470   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.461140   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.464962   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:42.961621   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:42.965256   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.461421   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.464948   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:43.961387   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:43.964219   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.461352   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.465051   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:44.960937   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:44.964051   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.461327   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.464861   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:45.997260   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:45.997288   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.462281   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:46.464371   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:46.961159   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:46.963920   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.461847   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.465575   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:47.961123   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:47.964587   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:48.462118   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:48.464513   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.058080   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.058132   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.461721   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.463718   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:49.961580   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:49.964719   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.461137   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.465066   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:50.962171   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:50.964429   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.460687   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.464158   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:51.961753   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:51.964536   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.461704   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.465047   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:52.961516   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:52.964633   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.461820   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.465509   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:53.961947   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:53.963720   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.461954   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:54.465628   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:54.962636   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.062865   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.461267   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:55.464922   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:55.961374   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.062099   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.461569   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.465407   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:56.961609   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:56.964650   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.461521   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.465165   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:57.960665   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:57.964512   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.460929   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.464328   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:58.961048   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:58.964072   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.460710   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.463973   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:39:59.962168   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:39:59.964654   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.461410   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.465409   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:00.960374   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:00.965897   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.462042   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.464462   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:01.961000   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:01.964195   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.462758   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.464881   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:02.961519   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:02.964381   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 23:40:03.461221   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:03.464652   15595 kapi.go:107] duration metric: took 56.503836288s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 23:40:03.961029   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.462061   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:04.961189   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.461465   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:05.960753   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.461396   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:06.960952   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.462825   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:07.962655   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.461488   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:08.961769   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.462225   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:09.961962   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.461577   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:10.961681   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.461097   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:11.962311   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.461384   15595 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 23:40:12.960687   15595 kapi.go:107] duration metric: took 1m10.003701138s to wait for app.kubernetes.io/name=ingress-nginx ...
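The interleaved kapi.go:96 / kapi.go:107 lines above show minikube's addon wait loop: it re-lists the pods matching a label selector roughly every 500ms, logs the current phase while they are Pending, and records a duration metric once they come up. A minimal client-go sketch of that pattern follows; the helper names, the Running-phase check, and the 10-minute timeout are illustrative assumptions, not minikube's actual kapi code.

// Hypothetical sketch of the polling pattern behind the kapi.go:96 /
// kapi.go:107 lines above, not a snippet from minikube itself.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods re-lists pods for the selector every 500ms (the cadence
// visible in the timestamps above) until all of them report Running,
// then logs an elapsed-time metric in the style of kapi.go:107.
func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && allRunning(pods.Items) {
			log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
			return nil
		}
		log.Printf("waiting for pod %q, current state: Pending", selector)
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func allRunning(pods []corev1.Pod) bool {
	for _, p := range pods {
		if p.Status.Phase != corev1.PodRunning {
			return false
		}
	}
	return true
}

func main() {
	// Falls back to in-cluster config when KUBECONFIG is unset.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPods(client, "ingress-nginx",
		"app.kubernetes.io/name=ingress-nginx", 10*time.Minute); err != nil {
		log.Fatal(err)
	}
}

In the run above, this style of loop took 56.503836288s for kubernetes.io/minikube-addons=csi-hostpath-driver and 1m10.003701138s for app.kubernetes.io/name=ingress-nginx.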
	I0923 23:40:32.254176   15595 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 23:40:32.254197   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:32.754380   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:33.253844   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:33.753840   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.254321   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:34.754511   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.254556   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:35.754839   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.254207   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:36.754198   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:37.253842   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:37.753862   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:38.253755   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:38.753783   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:39.253406   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:39.753949   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:40.254100   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:40.754286   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:41.253994   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:41.754306   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:42.254125   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:42.753727   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:43.253884   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:43.754130   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:44.254376   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:44.754097   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:45.253950   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:45.753685   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:46.253304   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:46.754235   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:47.254037   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:47.753978   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:48.254358   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:48.754404   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:49.253879   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:49.753831   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:50.253810   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:50.753250   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:51.254192   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:51.754123   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:52.254099   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:52.753647   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:53.253697   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:53.753523   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:54.253513   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:54.754577   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:55.254725   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:55.753665   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:56.254354   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:56.753981   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:57.254268   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:57.754423   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:58.254400   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:58.754088   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:59.254588   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:40:59.753450   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:00.254890   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:00.754282   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:01.254257   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:01.754778   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:02.253712   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:02.753614   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:03.255116   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:03.754338   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:04.254475   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:04.754703   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:05.254007   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:05.754006   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:06.254189   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:06.753757   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:07.254067   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:07.754361   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:08.253802   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:08.753775   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:09.254085   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:09.753948   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:10.254174   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:10.753648   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:11.254091   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:11.753666   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:12.254838   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:12.753580   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:13.253738   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:13.753500   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:14.253438   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:14.754433   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:15.254463   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:15.754786   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:16.253835   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:16.753936   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:17.254168   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:17.754395   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:18.253792   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:18.753553   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:19.253859   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:19.753971   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:20.254072   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:20.753831   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:21.253986   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:21.754223   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:22.253851   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:22.753363   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:23.254036   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:23.753748   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:24.253717   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:24.753440   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:25.253648   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:25.753299   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:26.254357   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:26.754170   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:27.254224   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:27.754325   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:28.253829   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:28.753964   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:29.254346   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:29.754003   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:30.253883   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:30.753652   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:31.254809   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:31.753915   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:32.253742   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:32.753924   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:33.254067   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:33.753921   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:34.254010   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:34.753979   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:35.254079   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:35.753805   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:36.253755   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:36.753896   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:37.253664   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:37.754924   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:38.253742   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:38.753388   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:39.253827   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:39.753405   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:40.255172   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:40.753726   15595 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 23:41:41.253721   15595 kapi.go:107] duration metric: took 2m32.502974916s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 23:41:41.255624   15595 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-537454 cluster.
	I0923 23:41:41.257009   15595 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 23:41:41.258504   15595 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 23:41:41.259891   15595 out.go:177] * Enabled addons: ingress-dns, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0923 23:41:41.261172   15595 addons.go:510] duration metric: took 2m47.391534602s for enable addons: enabled=[ingress-dns nvidia-device-plugin storage-provisioner cloud-spanner metrics-server inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0923 23:41:41.261217   15595 start.go:246] waiting for cluster config update ...
	I0923 23:41:41.261235   15595 start.go:255] writing updated cluster config ...
	I0923 23:41:41.261497   15595 ssh_runner.go:195] Run: rm -f paused
	I0923 23:41:41.311204   15595 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 23:41:41.314653   15595 out.go:177] * Done! kubectl is now configured to use "addons-537454" cluster and "default" namespace by default
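The gcp-auth notes in the start log above describe the webhook's opt-out: a pod whose labels include the `gcp-auth-skip-secret` key is skipped, so no credential secret is mounted into it. A hypothetical client-go sketch of creating such a pod follows; the pod name, image, and namespace are placeholder assumptions for illustration.

// Hypothetical example of opting a pod out of the gcp-auth credential
// mount via the gcp-auth-skip-secret label key mentioned in the log.
package main

import (
	"context"
	"log"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // placeholder name
			// Per the log, the webhook keys off the label key itself;
			// the value here is an assumption.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("created pod without the gcp-auth credential mount")
}

Per the log's own note, pods that already existed when gcp-auth came up only pick up the mount if they are recreated or if the addon is re-enabled with --refresh.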
	
	
	==> Docker <==
	Sep 23 23:51:22 addons-537454 dockerd[1346]: time="2024-09-23T23:51:22.679542712Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=85a86a8f431bca46545c2018ec10cca856e36a4aba34b6e3b4930b8651003505 spanID=792421772fcb5c8a traceID=91ca3e1d98c2802397b2e0f33fd530e9
	Sep 23 23:51:22 addons-537454 dockerd[1346]: time="2024-09-23T23:51:22.732964953Z" level=info msg="ignoring event" container=85a86a8f431bca46545c2018ec10cca856e36a4aba34b6e3b4930b8651003505 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:22 addons-537454 dockerd[1346]: time="2024-09-23T23:51:22.874569582Z" level=info msg="ignoring event" container=033ba9e3ca2aa57b1887d0f789083bf990b4d75035e8e992e3d6e1f35cacee21 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:26 addons-537454 dockerd[1346]: time="2024-09-23T23:51:26.466936051Z" level=info msg="ignoring event" container=e499a6c5e28caa3884a6e546d879b7b156d726b54d0f08dc3c0d3a4e54c51601 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:26 addons-537454 dockerd[1346]: time="2024-09-23T23:51:26.579438558Z" level=info msg="ignoring event" container=98593fd5f2c0ae61472a775d5e898a1bf21526f3bcbbf219669b0dff1187e56f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.237420235Z" level=info msg="ignoring event" container=85fc9d33073545bcd3b184a527a7fede1c7972efc74a85f333b98751eed285e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.254769550Z" level=info msg="ignoring event" container=a8887090bc6418a5ec7f60c01bb1d843a4ec4f897e3ff2e2937214e66948c830 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.256874423Z" level=info msg="ignoring event" container=82c82d6e6e8174e0e561225107e2f3297785d4c72872f5d270c171e017bbabf6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.257961604Z" level=info msg="ignoring event" container=c9689b4c043e9387c6745931267047360223078a0df6a7a3a37269b432e13acc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.341636937Z" level=info msg="ignoring event" container=4672b54c2819b2544a408779f1df24810f7237656e2285e979071565675e1d16 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.343507668Z" level=info msg="ignoring event" container=d862c5cc864803708d64a4e796e4bac8522052fd27bd131d35c9577af6e3fd9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.350821499Z" level=info msg="ignoring event" container=e58231110c8ba288b20d139e19598be5f2f7558b7e922b8982f8e342e3acc21c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.354220795Z" level=info msg="ignoring event" container=efe2e64bc791a725c8b782985201bbff7058b5ef0e7173a4ca700b58add2242d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.568023569Z" level=info msg="ignoring event" container=1ba287b2210ff6fae1dc06a5796bb500b948dea308c6f1037897dd1c70c84df0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.626127546Z" level=info msg="ignoring event" container=b1e47629bb8180ac4ead7e3cc17b3d895d114a5bd94b95b9ee5ed6e7c53e9062 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:28 addons-537454 dockerd[1346]: time="2024-09-23T23:51:28.664405518Z" level=info msg="ignoring event" container=a254ae1ac08402bcfc3bdc7cb2b5c7e0fd2e8c3c8ba147b3eac519e0cba62b7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:34 addons-537454 dockerd[1346]: time="2024-09-23T23:51:34.674465934Z" level=info msg="ignoring event" container=0f0316ea955359ec40a645762474b52d688c46aaf8215d7dbe44df7c1f66a0fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:34 addons-537454 dockerd[1346]: time="2024-09-23T23:51:34.678244676Z" level=info msg="ignoring event" container=e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:34 addons-537454 dockerd[1346]: time="2024-09-23T23:51:34.860633464Z" level=info msg="ignoring event" container=d952194c78b7713acacad5f96d42e14851ee64ee7c8b18a6f0dbe24a9f31841a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:34 addons-537454 dockerd[1346]: time="2024-09-23T23:51:34.889368263Z" level=info msg="ignoring event" container=ccb2f06cd71f5da5a5ac543109218ad524345d7593e0690d01b9a0fb3f80cb73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:36 addons-537454 dockerd[1346]: time="2024-09-23T23:51:36.721859356Z" level=info msg="ignoring event" container=9a82296f56d7a72cb2c36ce4e505b22086637882f1a715d70ffbe44fe2d35622 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:37 addons-537454 dockerd[1346]: time="2024-09-23T23:51:37.240365010Z" level=info msg="ignoring event" container=4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:37 addons-537454 dockerd[1346]: time="2024-09-23T23:51:37.307047461Z" level=info msg="ignoring event" container=656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:37 addons-537454 dockerd[1346]: time="2024-09-23T23:51:37.379388035Z" level=info msg="ignoring event" container=ef7132836dfe9a035b8bce4db15ce988c95596b33079fbaed1697b3d5ba5ef97 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 23 23:51:37 addons-537454 dockerd[1346]: time="2024-09-23T23:51:37.457142829Z" level=info msg="ignoring event" container=cf5a733292c15cf6353e427b7097726193b2f263e02efb154bc78f930f7376fc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	825dd9cd607a4       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                                  18 seconds ago      Running             hello-world-app           0                   b925667843d20       hello-world-app-55bf9c44b4-hxqld
	ff312ca5299e5       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                27 seconds ago      Running             nginx                     0                   3738d89f1434b       nginx
	fe7d1da920dae       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                  0                   d048d14965565       gcp-auth-89d5ffd79-sn4fp
	723ddb5565cf4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                     0                   c008bd76ff9a7       ingress-nginx-admission-patch-fwp8f
	1ffec87f1c62e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                    0                   9a20ad9504515       ingress-nginx-admission-create-lxvbh
	9238e76982397       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner       0                   3a194ab11d86f       storage-provisioner
	13d69356305ac       c69fa2e9cbf5f                                                                                                                12 minutes ago      Running             coredns                   0                   bdd877fd48ecc       coredns-7c65d6cfc9-l5wfh
	e28401b0850d7       60c005f310ff3                                                                                                                12 minutes ago      Running             kube-proxy                0                   272937adbc68e       kube-proxy-w5fqz
	214dea821cc63       6bab7719df100                                                                                                                12 minutes ago      Running             kube-apiserver            0                   0b57718b2b775       kube-apiserver-addons-537454
	613e2fd9c4f5d       9aa1fad941575                                                                                                                12 minutes ago      Running             kube-scheduler            0                   af7a1bfefc0b4       kube-scheduler-addons-537454
	77e27a6dc34d8       2e96e5913fc06                                                                                                                12 minutes ago      Running             etcd                      0                   c5f237f76fa00       etcd-addons-537454
	75b86394e4b74       175ffd71cce3d                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   1edaa28796b1e       kube-controller-manager-addons-537454
	
	
	==> coredns [13d69356305a] <==
	Trace[2006944596]: [30.001035926s] [30.001035926s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2138929788]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (23-Sep-2024 23:38:57.242) (total time: 30001ms):
	Trace[2138929788]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (23:39:27.243)
	Trace[2138929788]: [30.001387709s] [30.001387709s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:52564 - 8495 "HINFO IN 3404165173979663276.7129219825450192272. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0132458s
	[INFO] 10.244.0.25:45918 - 3637 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000333237s
	[INFO] 10.244.0.25:55469 - 15757 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000388209s
	[INFO] 10.244.0.25:43830 - 35243 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131332s
	[INFO] 10.244.0.25:53945 - 55133 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000179248s
	[INFO] 10.244.0.25:56147 - 19272 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000108475s
	[INFO] 10.244.0.25:49011 - 10483 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136914s
	[INFO] 10.244.0.25:37598 - 24065 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.007720845s
	[INFO] 10.244.0.25:51008 - 61303 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00779622s
	[INFO] 10.244.0.25:60805 - 51838 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006169507s
	[INFO] 10.244.0.25:44763 - 20872 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006223396s
	[INFO] 10.244.0.25:54202 - 59327 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00575653s
	[INFO] 10.244.0.25:47636 - 39349 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005842141s
	[INFO] 10.244.0.25:46525 - 55796 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.001905433s
	[INFO] 10.244.0.25:38510 - 46855 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.002133653s
	
	
	==> describe nodes <==
	Name:               addons-537454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-537454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-537454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T23_38_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-537454
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 23:38:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-537454
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 23:51:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 23:51:25 +0000   Mon, 23 Sep 2024 23:38:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 23:51:25 +0000   Mon, 23 Sep 2024 23:38:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 23:51:25 +0000   Mon, 23 Sep 2024 23:38:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 23:51:25 +0000   Mon, 23 Sep 2024 23:38:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-537454
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859308Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce018e36496d43f399501e36aa172242
	  System UUID:                2d3f03d9-12b7-4f2b-9664-48514f5a5550
	  Boot ID:                    a73781eb-6090-4022-b8fa-71700564163d
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     hello-world-app-55bf9c44b4-hxqld         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  gcp-auth                    gcp-auth-89d5ffd79-sn4fp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-l5wfh                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-537454                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-537454             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-537454    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-w5fqz                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-537454             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             170Mi (0%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-537454 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-537454 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-537454 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-537454 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-537454 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-537454 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-537454 event: Registered Node addons-537454 in Controller
	  Normal   CIDRAssignmentFailed     12m                cidrAllocator    Node addons-537454 status is now: CIDRAssignmentFailed
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 07 9f 0e 16 24 08 06
	[  +2.176361] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba 16 0f f8 cf 02 08 06
	[  +2.416922] IPv4: martian source 10.244.0.1 from 10.244.0.16, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 5e 48 cc 8b a3 91 08 06
	[  +5.633099] IPv4: martian source 10.244.0.1 from 10.244.0.18, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 32 9f 84 eb 13 08 06
	[  +0.189682] IPv4: martian source 10.244.0.1 from 10.244.0.19, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9a 26 47 97 35 ef 08 06
	[  +0.245435] IPv4: martian source 10.244.0.1 from 10.244.0.20, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 43 7d bd 47 df 08 06
	[Sep23 23:40] IPv4: martian source 10.244.0.1 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 83 8d d8 a5 08 06
	[  +1.030717] IPv4: martian source 10.244.0.1 from 10.244.0.22, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be d1 e9 19 a3 6e 08 06
	[Sep23 23:41] IPv4: martian source 10.244.0.1 from 10.244.0.23, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee ee b6 2d 2f 59 08 06
	[  +0.115985] IPv4: martian source 10.244.0.1 from 10.244.0.24, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ee 01 3f c6 20 24 08 06
	[ +28.041573] IPv4: martian source 10.244.0.1 from 10.244.0.25, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a 5d 48 05 4b 7a 08 06
	[  +0.000470] IPv4: martian source 10.244.0.25 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3e d3 89 e0 82 16 08 06
	[Sep23 23:51] IPv4: martian source 10.244.0.34 from 10.244.0.21, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 16 d3 83 8d d8 a5 08 06
	
	
	==> etcd [77e27a6dc34d] <==
	{"level":"info","ts":"2024-09-23T23:38:44.539580Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-537454 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-23T23:38:44.539633Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:44.539694Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:38:44.539659Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-23T23:38:44.539861Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-23T23:38:44.539882Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-23T23:38:44.540359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:44.540482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:44.540504Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-23T23:38:44.540756Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:38:44.540870Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-23T23:38:44.541577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-23T23:38:44.541643Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-23T23:39:00.734603Z","caller":"traceutil/trace.go:171","msg":"trace[1857202686] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"177.823674ms","start":"2024-09-23T23:39:00.556762Z","end":"2024-09-23T23:39:00.734585Z","steps":["trace[1857202686] 'process raft request'  (duration: 177.716647ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:39:00.742352Z","caller":"traceutil/trace.go:171","msg":"trace[1501633919] linearizableReadLoop","detail":"{readStateIndex:614; appliedIndex:613; }","duration":"100.035142ms","start":"2024-09-23T23:39:00.642278Z","end":"2024-09-23T23:39:00.742313Z","steps":["trace[1501633919] 'read index received'  (duration: 92.759071ms)","trace[1501633919] 'applied index is now lower than readState.Index'  (duration: 7.275502ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T23:39:00.742543Z","caller":"traceutil/trace.go:171","msg":"trace[360450289] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"107.952562ms","start":"2024-09-23T23:39:00.634575Z","end":"2024-09-23T23:39:00.742528Z","steps":["trace[360450289] 'process raft request'  (duration: 107.604878ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:39:00.742664Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.366286ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:39:00.742743Z","caller":"traceutil/trace.go:171","msg":"trace[1440667244] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:604; }","duration":"100.457002ms","start":"2024-09-23T23:39:00.642271Z","end":"2024-09-23T23:39:00.742728Z","steps":["trace[1440667244] 'agreement among raft nodes before linearized reading'  (duration: 100.339669ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:39:03.440551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.274699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-23T23:39:03.440649Z","caller":"traceutil/trace.go:171","msg":"trace[724787856] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:741; }","duration":"101.3802ms","start":"2024-09-23T23:39:03.339254Z","end":"2024-09-23T23:39:03.440635Z","steps":["trace[724787856] 'agreement among raft nodes before linearized reading'  (duration: 101.246699ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T23:39:18.113923Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.741475ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T23:39:18.114029Z","caller":"traceutil/trace.go:171","msg":"trace[1055127792] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:997; }","duration":"151.825028ms","start":"2024-09-23T23:39:17.962155Z","end":"2024-09-23T23:39:18.113980Z","steps":["trace[1055127792] 'range keys from in-memory index tree'  (duration: 151.673319ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T23:48:44.557629Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1878}
	{"level":"info","ts":"2024-09-23T23:48:44.582309Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1878,"took":"24.11761ms","hash":1187761048,"current-db-size-bytes":9084928,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4907008,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-23T23:48:44.582356Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1187761048,"revision":1878,"compact-revision":-1}
	
	
	==> gcp-auth [fe7d1da920da] <==
	2024/09/23 23:42:23 Ready to write response ...
	2024/09/23 23:42:23 Ready to marshal response ...
	2024/09/23 23:42:23 Ready to write response ...
	2024/09/23 23:50:25 Ready to marshal response ...
	2024/09/23 23:50:25 Ready to write response ...
	2024/09/23 23:50:25 Ready to marshal response ...
	2024/09/23 23:50:25 Ready to write response ...
	2024/09/23 23:50:26 Ready to marshal response ...
	2024/09/23 23:50:26 Ready to write response ...
	2024/09/23 23:50:26 Ready to marshal response ...
	2024/09/23 23:50:26 Ready to write response ...
	2024/09/23 23:50:26 Ready to marshal response ...
	2024/09/23 23:50:26 Ready to write response ...
	2024/09/23 23:50:36 Ready to marshal response ...
	2024/09/23 23:50:36 Ready to write response ...
	2024/09/23 23:50:37 Ready to marshal response ...
	2024/09/23 23:50:37 Ready to write response ...
	2024/09/23 23:50:46 Ready to marshal response ...
	2024/09/23 23:50:46 Ready to write response ...
	2024/09/23 23:51:07 Ready to marshal response ...
	2024/09/23 23:51:07 Ready to write response ...
	2024/09/23 23:51:17 Ready to marshal response ...
	2024/09/23 23:51:17 Ready to write response ...
	2024/09/23 23:51:18 Ready to marshal response ...
	2024/09/23 23:51:18 Ready to write response ...
	
	
	==> kernel <==
	 23:51:38 up 34 min,  0 users,  load average: 1.02, 0.51, 0.42
	Linux addons-537454 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [214dea821cc6] <==
	W0923 23:42:15.451118       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0923 23:42:15.738129       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0923 23:50:26.103067       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.105.60.202"}
	E0923 23:50:53.252119       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 23:50:55.456525       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 23:51:01.673852       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 23:51:02.752424       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 23:51:07.148945       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 23:51:07.440862       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.107.45.122"}
	I0923 23:51:17.941715       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.196.237"}
	E0923 23:51:21.346831       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0923 23:51:21.352171       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	I0923 23:51:34.424950       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:51:34.425002       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:51:34.446073       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:51:34.446156       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:51:34.457239       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:51:34.457293       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:51:34.551869       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:51:34.551920       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 23:51:34.552866       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 23:51:34.552893       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 23:51:35.457776       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 23:51:35.553772       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 23:51:35.570901       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [75b86394e4b7] <==
	W0923 23:51:23.349154       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:23.349190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:51:23.642649       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0923 23:51:23.642695       1 shared_informer.go:320] Caches are synced for garbage collector
	I0923 23:51:25.108833       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-537454"
	I0923 23:51:25.475729       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0923 23:51:27.567687       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:27.567724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:51:28.101115       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-attacher"
	I0923 23:51:28.157569       1 stateful_set.go:466] "StatefulSet has been deleted" logger="statefulset-controller" key="kube-system/csi-hostpath-resizer"
	I0923 23:51:29.251503       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-537454"
	I0923 23:51:29.743078       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I0923 23:51:34.639840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-56fcc65765" duration="5.913µs"
	E0923 23:51:35.459070       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 23:51:35.555125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 23:51:35.572197       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:36.561937       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:36.561981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:36.582680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:36.582717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 23:51:37.047239       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:37.047277       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 23:51:37.150396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.225µs"
	W0923 23:51:38.255662       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 23:51:38.255706       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [e28401b0850d] <==
	I0923 23:38:55.856823       1 server_linux.go:66] "Using iptables proxy"
	I0923 23:38:56.446367       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 23:38:56.446444       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 23:38:56.854144       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 23:38:56.854206       1 server_linux.go:169] "Using iptables Proxier"
	I0923 23:38:56.936653       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 23:38:56.940741       1 server.go:483] "Version info" version="v1.31.1"
	I0923 23:38:56.940768       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 23:38:56.949377       1 config.go:199] "Starting service config controller"
	I0923 23:38:56.949415       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 23:38:56.949454       1 config.go:105] "Starting endpoint slice config controller"
	I0923 23:38:56.949460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 23:38:56.950058       1 config.go:328] "Starting node config controller"
	I0923 23:38:56.950068       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 23:38:57.050537       1 shared_informer.go:320] Caches are synced for node config
	I0923 23:38:57.050572       1 shared_informer.go:320] Caches are synced for service config
	I0923 23:38:57.050607       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [613e2fd9c4f5] <==
	W0923 23:38:45.960795       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:38:45.960863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:45.960898       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:38:45.960939       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0923 23:38:46.769210       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:46.769247       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:46.841792       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 23:38:46.841831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:46.874325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:46.874368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:46.920746       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:46.920786       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:46.993577       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 23:38:46.993623       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:47.018923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 23:38:47.018958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:47.067554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 23:38:47.067599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:47.141914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 23:38:47.141959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:47.153158       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 23:38:47.153190       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 23:38:47.288585       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0923 23:38:47.288622       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0923 23:38:50.458389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 23:51:35 addons-537454 kubelet[2454]: E0923 23:51:35.594475    2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a" containerID="e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a"
	Sep 23 23:51:35 addons-537454 kubelet[2454]: I0923 23:51:35.594516    2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a"} err="failed to get container status \"e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a\": rpc error: code = Unknown desc = Error response from daemon: No such container: e29e6fe8be07ab8848183d3cf71d948552d6d215a2c56d6b13d9087df2c9614a"
	Sep 23 23:51:36 addons-537454 kubelet[2454]: E0923 23:51:36.456163    2454 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="8443b885-eed7-4568-8d34-7c92b8fba271"
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.461171    2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df8ff41-a477-43f2-b65f-71742df438c6" path="/var/lib/kubelet/pods/0df8ff41-a477-43f2-b65f-71742df438c6/volumes"
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.461496    2454 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f99f77f-2e6a-41b5-ba6f-03af5b5dc541" path="/var/lib/kubelet/pods/5f99f77f-2e6a-41b5-ba6f-03af5b5dc541/volumes"
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.887168    2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p6tq\" (UniqueName: \"kubernetes.io/projected/1deea5b3-386d-42b6-b299-7437f6c4451c-kube-api-access-4p6tq\") pod \"1deea5b3-386d-42b6-b299-7437f6c4451c\" (UID: \"1deea5b3-386d-42b6-b299-7437f6c4451c\") "
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.887234    2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1deea5b3-386d-42b6-b299-7437f6c4451c-gcp-creds\") pod \"1deea5b3-386d-42b6-b299-7437f6c4451c\" (UID: \"1deea5b3-386d-42b6-b299-7437f6c4451c\") "
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.887278    2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1deea5b3-386d-42b6-b299-7437f6c4451c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1deea5b3-386d-42b6-b299-7437f6c4451c" (UID: "1deea5b3-386d-42b6-b299-7437f6c4451c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.888943    2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1deea5b3-386d-42b6-b299-7437f6c4451c-kube-api-access-4p6tq" (OuterVolumeSpecName: "kube-api-access-4p6tq") pod "1deea5b3-386d-42b6-b299-7437f6c4451c" (UID: "1deea5b3-386d-42b6-b299-7437f6c4451c"). InnerVolumeSpecName "kube-api-access-4p6tq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.987743    2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4p6tq\" (UniqueName: \"kubernetes.io/projected/1deea5b3-386d-42b6-b299-7437f6c4451c-kube-api-access-4p6tq\") on node \"addons-537454\" DevicePath \"\""
	Sep 23 23:51:36 addons-537454 kubelet[2454]: I0923 23:51:36.987779    2454 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1deea5b3-386d-42b6-b299-7437f6c4451c-gcp-creds\") on node \"addons-537454\" DevicePath \"\""
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.490791    2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhdtd\" (UniqueName: \"kubernetes.io/projected/09f0475c-4746-427a-ab8c-9c11b2ee2bfa-kube-api-access-xhdtd\") pod \"09f0475c-4746-427a-ab8c-9c11b2ee2bfa\" (UID: \"09f0475c-4746-427a-ab8c-9c11b2ee2bfa\") "
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.493011    2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f0475c-4746-427a-ab8c-9c11b2ee2bfa-kube-api-access-xhdtd" (OuterVolumeSpecName: "kube-api-access-xhdtd") pod "09f0475c-4746-427a-ab8c-9c11b2ee2bfa" (UID: "09f0475c-4746-427a-ab8c-9c11b2ee2bfa"). InnerVolumeSpecName "kube-api-access-xhdtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.591396    2454 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgdfc\" (UniqueName: \"kubernetes.io/projected/d8940b72-00d5-4d8d-94d1-657f7a3dfea2-kube-api-access-jgdfc\") pod \"d8940b72-00d5-4d8d-94d1-657f7a3dfea2\" (UID: \"d8940b72-00d5-4d8d-94d1-657f7a3dfea2\") "
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.591485    2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xhdtd\" (UniqueName: \"kubernetes.io/projected/09f0475c-4746-427a-ab8c-9c11b2ee2bfa-kube-api-access-xhdtd\") on node \"addons-537454\" DevicePath \"\""
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.593587    2454 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8940b72-00d5-4d8d-94d1-657f7a3dfea2-kube-api-access-jgdfc" (OuterVolumeSpecName: "kube-api-access-jgdfc") pod "d8940b72-00d5-4d8d-94d1-657f7a3dfea2" (UID: "d8940b72-00d5-4d8d-94d1-657f7a3dfea2"). InnerVolumeSpecName "kube-api-access-jgdfc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.601978    2454 scope.go:117] "RemoveContainer" containerID="656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.619730    2454 scope.go:117] "RemoveContainer" containerID="656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: E0923 23:51:37.620556    2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e" containerID="656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.620610    2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e"} err="failed to get container status \"656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e\": rpc error: code = Unknown desc = Error response from daemon: No such container: 656ebda72ff9432698135abf334852baea2b72e14fe264f0d3bf1d403d929e3e"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.620641    2454 scope.go:117] "RemoveContainer" containerID="4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.639061    2454 scope.go:117] "RemoveContainer" containerID="4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: E0923 23:51:37.640567    2454 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035" containerID="4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.640616    2454 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035"} err="failed to get container status \"4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4f40b57f6b1a1c843d88711144dc31f16c256afadfd1bf2cd63a22f50437d035"
	Sep 23 23:51:37 addons-537454 kubelet[2454]: I0923 23:51:37.691728    2454 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jgdfc\" (UniqueName: \"kubernetes.io/projected/d8940b72-00d5-4d8d-94d1-657f7a3dfea2-kube-api-access-jgdfc\") on node \"addons-537454\" DevicePath \"\""
	
	
	==> storage-provisioner [9238e7698239] <==
	I0923 23:39:01.935766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 23:39:01.949520       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 23:39:01.949587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 23:39:02.037623       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 23:39:02.038658       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4631d281-519e-43a4-8019-7b7519de8e27", APIVersion:"v1", ResourceVersion:"637", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-537454_fa828641-d4f7-484e-aec9-b64448350b3c became leader
	I0923 23:39:02.038699       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-537454_fa828641-d4f7-484e-aec9-b64448350b3c!
	I0923 23:39:02.138888       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-537454_fa828641-d4f7-484e-aec9-b64448350b3c!
	

                                                
                                                
-- /stdout --
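One reading of the logs above: the repeated kube-controller-manager reflector errors ("failed to list *v1.PartialObjectMetadata: the server could not find the requested resource") begin right as the kube-apiserver terminates the snapshot.storage.k8s.io watchers at 23:51:35, so they are expected fallout of the CSI snapshot addon teardown rather than a cause of the registry failure. A hedged way to confirm the CRDs are really gone (assuming the cluster from this run is still reachable):

	# Hedged check: an empty result matches the "Terminating all watchers"
	# messages in the kube-apiserver log above.
	kubectl --context addons-537454 get crd | grep snapshot.storage.k8s.io || echo "no snapshot CRDs left"
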
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-537454 -n addons-537454
helpers_test.go:261: (dbg) Run:  kubectl --context addons-537454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-537454 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-537454 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-537454/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 23:42:23 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-skvtk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-skvtk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m15s                  default-scheduler  Successfully assigned default/busybox to addons-537454
	  Normal   Pulling    7m44s (x4 over 9m15s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m44s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m44s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m8s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
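The Events above show the busybox pod never started: every pull of gcr.io/k8s-minikube/busybox:1.28.4-glibc was rejected by gcr.io with "unauthorized: authentication failed". A hedged way to separate a registry-side rejection from a kubelet or credential problem is to repeat the same pull directly with the node's Docker daemon:

	# Hypothetical diagnostic, assuming the profile is still running: the same
	# 401 here would implicate gcr.io auth for this tag, not the cluster.
	minikube -p addons-537454 ssh -- docker pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
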
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (73.49s)
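
To iterate on this failure without replaying the whole suite, the single test can be selected with Go's -run filter. This is a sketch from the minikube repo root; the start-args flag passed after -args is an assumption about the harness configuration, not a verified invocation:

	# Hedged re-run of only the failing test; adjust flags to your environment.
	go test -v -timeout 30m ./test/integration -run 'TestAddons/parallel/Registry' \
	  -args "--minikube-start-args=--driver=docker"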

                                                
                                    

Test pass (321/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 11.8
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 0.97
21 TestBinaryMirror 0.74
22 TestOffline 80.42
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 212.36
29 TestAddons/serial/Volcano 41.71
31 TestAddons/serial/GCPAuth/Namespaces 0.11
34 TestAddons/parallel/Ingress 19.88
35 TestAddons/parallel/InspektorGadget 11.6
36 TestAddons/parallel/MetricsServer 6.64
38 TestAddons/parallel/CSI 51.66
39 TestAddons/parallel/Headlamp 19.3
40 TestAddons/parallel/CloudSpanner 5.42
41 TestAddons/parallel/LocalPath 55.06
42 TestAddons/parallel/NvidiaDevicePlugin 5.6
43 TestAddons/parallel/Yakd 10.55
44 TestAddons/StoppedEnableDisable 11.09
45 TestCertOptions 32.92
46 TestCertExpiration 234.77
47 TestDockerFlags 32.79
48 TestForceSystemdFlag 30.75
49 TestForceSystemdEnv 36.67
51 TestKVMDriverInstallOrUpdate 4.65
55 TestErrorSpam/setup 22.25
56 TestErrorSpam/start 0.55
57 TestErrorSpam/status 0.83
58 TestErrorSpam/pause 1.11
59 TestErrorSpam/unpause 1.33
60 TestErrorSpam/stop 1.91
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 59.17
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 26.17
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.21
72 TestFunctional/serial/CacheCmd/cache/add_local 1.43
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.17
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 36.8
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.94
83 TestFunctional/serial/LogsFileCmd 0.97
84 TestFunctional/serial/InvalidService 3.96
86 TestFunctional/parallel/ConfigCmd 0.34
87 TestFunctional/parallel/DashboardCmd 10.29
88 TestFunctional/parallel/DryRun 0.47
89 TestFunctional/parallel/InternationalLanguage 0.19
90 TestFunctional/parallel/StatusCmd 1.05
94 TestFunctional/parallel/ServiceCmdConnect 6.91
95 TestFunctional/parallel/AddonsCmd 0.18
96 TestFunctional/parallel/PersistentVolumeClaim 36.81
98 TestFunctional/parallel/SSHCmd 0.65
99 TestFunctional/parallel/CpCmd 1.68
100 TestFunctional/parallel/MySQL 25.35
101 TestFunctional/parallel/FileSync 0.26
102 TestFunctional/parallel/CertSync 1.8
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
110 TestFunctional/parallel/License 0.6
111 TestFunctional/parallel/ServiceCmd/DeployApp 10.18
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
113 TestFunctional/parallel/MountCmd/any-port 7.88
114 TestFunctional/parallel/ProfileCmd/profile_list 0.42
115 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
116 TestFunctional/parallel/Version/short 0.06
117 TestFunctional/parallel/Version/components 0.6
118 TestFunctional/parallel/DockerEnv/bash 0.84
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
126 TestFunctional/parallel/ImageCommands/ImageBuild 4.85
127 TestFunctional/parallel/ImageCommands/Setup 1.92
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
129 TestFunctional/parallel/MountCmd/specific-port 1.7
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.88
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
132 TestFunctional/parallel/ServiceCmd/List 0.61
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.56
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
137 TestFunctional/parallel/ServiceCmd/Format 0.37
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
139 TestFunctional/parallel/ServiceCmd/URL 0.33
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 99.91
160 TestMultiControlPlane/serial/DeployApp 5.26
161 TestMultiControlPlane/serial/PingHostFromPods 1.03
162 TestMultiControlPlane/serial/AddWorkerNode 19.87
163 TestMultiControlPlane/serial/NodeLabels 0.07
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.81
165 TestMultiControlPlane/serial/CopyFile 14.84
166 TestMultiControlPlane/serial/StopSecondaryNode 11.28
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.65
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.42
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.91
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 213.48
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.27
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.64
173 TestMultiControlPlane/serial/StopCluster 32.51
174 TestMultiControlPlane/serial/RestartCluster 81.05
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.62
176 TestMultiControlPlane/serial/AddSecondaryNode 32.58
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
180 TestImageBuild/serial/Setup 21.57
181 TestImageBuild/serial/NormalBuild 2.76
182 TestImageBuild/serial/BuildWithBuildArg 0.91
183 TestImageBuild/serial/BuildWithDockerIgnore 0.94
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
188 TestJSONOutput/start/Command 63.31
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.5
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.43
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.78
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.2
213 TestKicCustomNetwork/create_custom_network 25.92
214 TestKicCustomNetwork/use_default_bridge_network 25.73
215 TestKicExistingNetwork 22.23
216 TestKicCustomSubnet 23.03
217 TestKicStaticIP 26.04
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 47.7
222 TestMountStart/serial/StartWithMountFirst 10.32
223 TestMountStart/serial/VerifyMountFirst 0.23
224 TestMountStart/serial/StartWithMountSecond 10.29
225 TestMountStart/serial/VerifyMountSecond 0.23
226 TestMountStart/serial/DeleteFirst 1.46
227 TestMountStart/serial/VerifyMountPostDelete 0.23
228 TestMountStart/serial/Stop 1.17
229 TestMountStart/serial/RestartStopped 8.7
230 TestMountStart/serial/VerifyMountPostStop 0.23
233 TestMultiNode/serial/FreshStart2Nodes 72.54
234 TestMultiNode/serial/DeployApp2Nodes 39.81
235 TestMultiNode/serial/PingHostFrom2Pods 0.71
236 TestMultiNode/serial/AddNode 18.65
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.59
239 TestMultiNode/serial/CopyFile 8.65
240 TestMultiNode/serial/StopNode 2.07
241 TestMultiNode/serial/StartAfterStop 9.6
242 TestMultiNode/serial/RestartKeepsNodes 108.25
243 TestMultiNode/serial/DeleteNode 5.14
244 TestMultiNode/serial/StopMultiNode 21.31
245 TestMultiNode/serial/RestartMultiNode 54.48
246 TestMultiNode/serial/ValidateNameConflict 26.22
251 TestPreload 109.04
253 TestScheduledStopUnix 94.17
254 TestSkaffold 104.93
256 TestInsufficientStorage 12.52
257 TestRunningBinaryUpgrade 60.56
259 TestKubernetesUpgrade 341.77
260 TestMissingContainerUpgrade 103.56
261 TestStoppedBinaryUpgrade/Setup 2.9
264 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
272 TestPause/serial/Start 40.94
273 TestNoKubernetes/serial/StartWithK8s 32.05
274 TestStoppedBinaryUpgrade/Upgrade 146.69
275 TestNoKubernetes/serial/StartWithStopK8s 7.19
276 TestNoKubernetes/serial/Start 10.11
277 TestPause/serial/SecondStartNoReconfiguration 33.58
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
279 TestNoKubernetes/serial/ProfileList 16.06
280 TestNoKubernetes/serial/Stop 1.25
281 TestNoKubernetes/serial/StartNoArgs 7.66
282 TestPause/serial/Pause 0.52
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.24
284 TestPause/serial/VerifyStatus 0.29
285 TestPause/serial/Unpause 0.41
286 TestPause/serial/PauseAgain 0.6
287 TestPause/serial/DeletePaused 2.31
288 TestPause/serial/VerifyDeletedResources 1.81
300 TestStoppedBinaryUpgrade/MinikubeLogs 1.42
302 TestStartStop/group/old-k8s-version/serial/FirstStart 133.71
304 TestStartStop/group/no-preload/serial/FirstStart 69.92
305 TestStartStop/group/no-preload/serial/DeployApp 8.27
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.85
307 TestStartStop/group/no-preload/serial/Stop 10.86
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
309 TestStartStop/group/no-preload/serial/SecondStart 263.58
310 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
311 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.75
312 TestStartStop/group/old-k8s-version/serial/Stop 10.82
313 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/old-k8s-version/serial/SecondStart 131.54
316 TestStartStop/group/embed-certs/serial/FirstStart 39.53
317 TestStartStop/group/embed-certs/serial/DeployApp 9.29
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.79
319 TestStartStop/group/embed-certs/serial/Stop 10.75
320 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/embed-certs/serial/SecondStart 302.95
322 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
323 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 43.75
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
327 TestStartStop/group/old-k8s-version/serial/Pause 2.76
329 TestStartStop/group/newest-cni/serial/FirstStart 29.12
330 TestStartStop/group/newest-cni/serial/DeployApp 0
331 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.97
332 TestStartStop/group/newest-cni/serial/Stop 10.8
333 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
335 TestStartStop/group/newest-cni/serial/SecondStart 15.54
336 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
337 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.84
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 286.95
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.21
343 TestStartStop/group/newest-cni/serial/Pause 2.35
344 TestNetworkPlugins/group/auto/Start 38.78
345 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
347 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
348 TestStartStop/group/no-preload/serial/Pause 2.5
349 TestNetworkPlugins/group/kindnet/Start 57.15
350 TestNetworkPlugins/group/auto/KubeletFlags 0.28
351 TestNetworkPlugins/group/auto/NetCatPod 8.19
352 TestNetworkPlugins/group/auto/DNS 26.66
353 TestNetworkPlugins/group/auto/Localhost 0.12
354 TestNetworkPlugins/group/auto/HairPin 0.11
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.49
357 TestNetworkPlugins/group/kindnet/NetCatPod 11.24
358 TestNetworkPlugins/group/calico/Start 64.35
359 TestNetworkPlugins/group/kindnet/DNS 0.17
360 TestNetworkPlugins/group/kindnet/Localhost 0.13
361 TestNetworkPlugins/group/kindnet/HairPin 0.11
362 TestNetworkPlugins/group/custom-flannel/Start 48.41
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.28
365 TestNetworkPlugins/group/calico/NetCatPod 10.21
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
367 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.17
368 TestNetworkPlugins/group/calico/DNS 0.14
369 TestNetworkPlugins/group/calico/Localhost 0.13
370 TestNetworkPlugins/group/calico/HairPin 0.13
371 TestNetworkPlugins/group/custom-flannel/DNS 0.13
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
374 TestNetworkPlugins/group/false/Start 65.22
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestNetworkPlugins/group/enable-default-cni/Start 68.95
377 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.09
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
379 TestStartStop/group/embed-certs/serial/Pause 2.66
380 TestNetworkPlugins/group/flannel/Start 43.58
381 TestNetworkPlugins/group/false/KubeletFlags 0.28
382 TestNetworkPlugins/group/false/NetCatPod 9.18
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
385 TestNetworkPlugins/group/flannel/NetCatPod 9.17
386 TestNetworkPlugins/group/false/DNS 0.16
387 TestNetworkPlugins/group/false/Localhost 0.12
388 TestNetworkPlugins/group/false/HairPin 0.13
389 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
390 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
391 TestNetworkPlugins/group/flannel/DNS 0.15
392 TestNetworkPlugins/group/flannel/Localhost 0.12
393 TestNetworkPlugins/group/flannel/HairPin 0.12
394 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
395 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
396 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
397 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
398 TestNetworkPlugins/group/bridge/Start 43.95
399 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
400 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.37
401 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
402 TestNetworkPlugins/group/kubenet/Start 33.13
403 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
404 TestNetworkPlugins/group/bridge/NetCatPod 9.18
405 TestNetworkPlugins/group/kubenet/KubeletFlags 0.3
406 TestNetworkPlugins/group/kubenet/NetCatPod 10.25
407 TestNetworkPlugins/group/bridge/DNS 0.13
408 TestNetworkPlugins/group/bridge/Localhost 0.11
409 TestNetworkPlugins/group/bridge/HairPin 0.11
410 TestNetworkPlugins/group/kubenet/DNS 21.02
411 TestNetworkPlugins/group/kubenet/Localhost 0.11
412 TestNetworkPlugins/group/kubenet/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (18.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-028402 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-028402 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (18.820717715s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.82s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 23:37:54.386298   14219 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0923 23:37:54.386399   14219 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
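
The preload-exists check only asserts that the cached tarball is present on disk; a hedged manual equivalent, assuming the default cache layout shown in the log above, is:

	# Hypothetical spot check: list cached preload tarballs for v1.20.0.
	ls "${MINIKUBE_HOME:-$HOME/.minikube}"/cache/preloaded-tarball/ | grep v1.20.0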

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-028402
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-028402: exit status 85 (57.385189ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-028402 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |          |
	|         | -p download-only-028402        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:37:35
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:37:35.601635   14231 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:37:35.601782   14231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:35.601792   14231 out.go:358] Setting ErrFile to fd 2...
	I0923 23:37:35.601799   14231 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:35.601976   14231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	W0923 23:37:35.602142   14231 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19696-7438/.minikube/config/config.json: open /home/jenkins/minikube-integration/19696-7438/.minikube/config/config.json: no such file or directory
	I0923 23:37:35.602732   14231 out.go:352] Setting JSON to true
	I0923 23:37:35.603643   14231 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1200,"bootTime":1727133456,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:37:35.603736   14231 start.go:139] virtualization: kvm guest
	I0923 23:37:35.606087   14231 out.go:97] [download-only-028402] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 23:37:35.606207   14231 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 23:37:35.606279   14231 notify.go:220] Checking for updates...
	I0923 23:37:35.607651   14231 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:37:35.608957   14231 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:37:35.610244   14231 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:37:35.611441   14231 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	I0923 23:37:35.612616   14231 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 23:37:35.614792   14231 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 23:37:35.615002   14231 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:37:35.637290   14231 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 23:37:35.637365   14231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:37:36.007286   14231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 23:37:35.996883094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:37:36.007423   14231 docker.go:318] overlay module found
	I0923 23:37:36.009506   14231 out.go:97] Using the docker driver based on user configuration
	I0923 23:37:36.009538   14231 start.go:297] selected driver: docker
	I0923 23:37:36.009546   14231 start.go:901] validating driver "docker" against <nil>
	I0923 23:37:36.009636   14231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:37:36.057217   14231 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 23:37:36.04793238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:37:36.057418   14231 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:37:36.057977   14231 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 23:37:36.058209   14231 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 23:37:36.060328   14231 out.go:169] Using Docker driver with root privileges
	I0923 23:37:36.061861   14231 cni.go:84] Creating CNI manager for ""
	I0923 23:37:36.061941   14231 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0923 23:37:36.062032   14231 start.go:340] cluster config:
	{Name:download-only-028402 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-028402 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:37:36.063594   14231 out.go:97] Starting "download-only-028402" primary control-plane node in "download-only-028402" cluster
	I0923 23:37:36.063628   14231 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 23:37:36.065037   14231 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0923 23:37:36.065064   14231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 23:37:36.065123   14231 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0923 23:37:36.080958   14231 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0923 23:37:36.081117   14231 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0923 23:37:36.081202   14231 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0923 23:37:36.221664   14231 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 23:37:36.221697   14231 cache.go:56] Caching tarball of preloaded images
	I0923 23:37:36.221877   14231 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0923 23:37:36.223869   14231 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 23:37:36.223893   14231 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 23:37:36.330771   14231 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0923 23:37:49.807444   14231 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0923 23:37:49.807540   14231 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-028402 host does not exist
	  To start a cluster, run: "minikube start -p download-only-028402"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
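
The preload download in the log above carries a checksum hint in the URL (?checksum=md5:...) and the tarball is verified after it is written. A rough stdlib sketch of that verification step, assuming the expected hex digest has already been parsed out of the URL (MD5 only because that is what the log's URL names):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-reads a downloaded file and compares its MD5 digest with the
// expected hex string, e.g. the value after "checksum=md5:" in the URL above.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// usage: verify <file> <expected-md5-hex>
	if err := verifyMD5(os.Args[1], os.Args[2]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("checksum ok")
}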

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-028402
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (11.8s)
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-856379 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-856379 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.79588558s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (11.80s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 23:38:06.569116   14219 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0923 23:38:06.569150   14219 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-856379
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-856379: exit status 85 (60.321492ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-028402 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p download-only-028402        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| delete  | -p download-only-028402        | download-only-028402 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC | 23 Sep 24 23:37 UTC |
	| start   | -o=json --download-only        | download-only-856379 | jenkins | v1.34.0 | 23 Sep 24 23:37 UTC |                     |
	|         | -p download-only-856379        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 23:37:54
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 23:37:54.810828   14642 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:37:54.811082   14642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:54.811091   14642 out.go:358] Setting ErrFile to fd 2...
	I0923 23:37:54.811095   14642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:37:54.811264   14642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0923 23:37:54.811801   14642 out.go:352] Setting JSON to true
	I0923 23:37:54.812622   14642 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1219,"bootTime":1727133456,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:37:54.812709   14642 start.go:139] virtualization: kvm guest
	I0923 23:37:54.815100   14642 out.go:97] [download-only-856379] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:37:54.815264   14642 notify.go:220] Checking for updates...
	I0923 23:37:54.816778   14642 out.go:169] MINIKUBE_LOCATION=19696
	I0923 23:37:54.818105   14642 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:37:54.819454   14642 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:37:54.821068   14642 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	I0923 23:37:54.822436   14642 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 23:37:54.825110   14642 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 23:37:54.825438   14642 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:37:54.847224   14642 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 23:37:54.847359   14642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:37:54.893808   14642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 23:37:54.885179441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:37:54.893914   14642 docker.go:318] overlay module found
	I0923 23:37:54.895674   14642 out.go:97] Using the docker driver based on user configuration
	I0923 23:37:54.895699   14642 start.go:297] selected driver: docker
	I0923 23:37:54.895707   14642 start.go:901] validating driver "docker" against <nil>
	I0923 23:37:54.895800   14642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:37:54.940223   14642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 23:37:54.931845997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:37:54.940418   14642 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 23:37:54.940911   14642 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 23:37:54.941041   14642 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 23:37:54.942862   14642 out.go:169] Using Docker driver with root privileges
	I0923 23:37:54.944009   14642 cni.go:84] Creating CNI manager for ""
	I0923 23:37:54.944069   14642 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0923 23:37:54.944083   14642 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0923 23:37:54.944150   14642 start.go:340] cluster config:
	{Name:download-only-856379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-856379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:37:54.945474   14642 out.go:97] Starting "download-only-856379" primary control-plane node in "download-only-856379" cluster
	I0923 23:37:54.945501   14642 cache.go:121] Beginning downloading kic base image for docker with docker
	I0923 23:37:54.946698   14642 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0923 23:37:54.946718   14642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:37:54.946824   14642 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0923 23:37:54.961781   14642 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0923 23:37:54.961924   14642 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0923 23:37:54.961943   14642 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0923 23:37:54.961948   14642 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0923 23:37:54.961956   14642 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0923 23:37:55.427677   14642 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0923 23:37:55.427708   14642 cache.go:56] Caching tarball of preloaded images
	I0923 23:37:55.427889   14642 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0923 23:37:55.429677   14642 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 23:37:55.429697   14642 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0923 23:37:55.534415   14642 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /home/jenkins/minikube-integration/19696-7438/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-856379 host does not exist
	  To start a cluster, run: "minikube start -p download-only-856379"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-856379
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnlyKic (0.97s)
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-467922 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-467922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-467922
--- PASS: TestDownloadOnlyKic (0.97s)

                                                
                                    
TestBinaryMirror (0.74s)
                                                
=== RUN   TestBinaryMirror
I0923 23:38:08.168105   14219 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-675850 --alsologtostderr --binary-mirror http://127.0.0.1:43631 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-675850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-675850
--- PASS: TestBinaryMirror (0.74s)
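
TestBinaryMirror's log shows the other checksum form, checksum=file:<url>, where the expected digest lives in a sidecar .sha256 file next to the binary. A hedged sketch of that flow, fetching the sidecar from the URL in the log and hashing a local copy of the binary (the local file name "kubectl" is an assumption for illustration):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// expectedSHA256 fetches a sidecar checksum file like the one named by the
// "checksum=file:..." URL in the log and returns its hex digest field.
func expectedSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(b)) // digest, optionally followed by a name
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file")
	}
	return fields[0], nil
}

func main() {
	want, err := expectedSHA256("https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256")
	if err != nil {
		panic(err)
	}
	f, err := os.Open("kubectl") // assumed local copy of the downloaded binary
	if err != nil {
		panic(err)
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		panic(err)
	}
	fmt.Println("match:", hex.EncodeToString(h.Sum(nil)) == want)
}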

                                                
                                    
TestOffline (80.42s)
                                                
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-455716 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-455716 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m17.603265593s)
helpers_test.go:175: Cleaning up "offline-docker-455716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-455716
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-455716: (2.818547006s)
--- PASS: TestOffline (80.42s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-537454
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-537454: exit status 85 (46.851821ms)

                                                
                                                
-- stdout --
	* Profile "addons-537454" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-537454"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-537454
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-537454: exit status 85 (45.562131ms)

                                                
                                                
-- stdout --
	* Profile "addons-537454" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-537454"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (212.36s)
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-537454 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-537454 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m32.361527071s)
--- PASS: TestAddons/Setup (212.36s)

                                                
                                    
TestAddons/serial/Volcano (41.71s)
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 14.494688ms
addons_test.go:843: volcano-admission stabilized in 14.560311ms
addons_test.go:851: volcano-controller stabilized in 14.759993ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-2pmf2" [da388443-f09a-42f6-915b-5ce8e50798fa] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.002588729s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-lb89d" [3991d909-5721-4b6a-9a4a-d06a3d8d9276] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003498972s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-blhvg" [fbec47d1-54ba-400a-a2f5-f2ab75833027] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003563214s
addons_test.go:870: (dbg) Run:  kubectl --context addons-537454 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-537454 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-537454 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ccbe2754-f4af-4207-8ad7-7546077f261f] Pending
helpers_test.go:344: "test-job-nginx-0" [ccbe2754-f4af-4207-8ad7-7546077f261f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ccbe2754-f4af-4207-8ad7-7546077f261f] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.003830531s
addons_test.go:906: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable volcano --alsologtostderr -v=1: (10.35458909s)
--- PASS: TestAddons/serial/Volcano (41.71s)
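
Every "waiting ... for pods matching" line in these addon tests is a label-selector poll against the cluster. A condensed client-go sketch of that wait, using the Volcano selector and namespace from the log above (the helper name is hypothetical, and the real test helper also inspects readiness conditions, not just the Running phase):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls until at least one pod matching selector is Running.
func waitForRunning(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // keep polling
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitForRunning(cs, "volcano-system", "app=volcano-scheduler", 6*time.Minute))
}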

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-537454 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-537454 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/parallel/Ingress (19.88s)
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-537454 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-537454 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-537454 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [eb8159a7-4ef8-4bec-bf82-a02fb42a33d5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [eb8159a7-4ef8-4bec-bf82-a02fb42a33d5] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003735186s
I0923 23:51:17.451713   14219 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-537454 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable ingress-dns --alsologtostderr -v=1: (1.062413906s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable ingress --alsologtostderr -v=1: (7.571810733s)
--- PASS: TestAddons/parallel/Ingress (19.88s)
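
The curl in the Ingress test pins the Host header instead of relying on DNS, so the request reaches the nginx Ingress rule by virtual-host name. The same trick from plain Go, using the node IP reported by "minikube ip" in the log (an illustration, not the test's actual code):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hit the ingress controller by node IP and set the virtual-host name
	// explicitly, mirroring `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`
	// run inside the node above.
	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // routes to the nginx Ingress rule

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}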

                                                
                                    
TestAddons/parallel/InspektorGadget (11.6s)
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-97w4j" [deddcb26-1ede-4ab6-a3fc-3d7401fb4f36] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003645929s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-537454
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-537454: (5.593792025s)
--- PASS: TestAddons/parallel/InspektorGadget (11.60s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.64s)
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.265549ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vhxvb" [2b86c7cb-1ba3-4c6e-a24b-d7efc7056acf] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004386244s
addons_test.go:413: (dbg) Run:  kubectl --context addons-537454 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.64s)

                                                
                                    
TestAddons/parallel/CSI (51.66s)
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0923 23:50:43.115811   14219 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0923 23:50:43.119756   14219 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 23:50:43.119776   14219 kapi.go:107] duration metric: took 3.986475ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.993447ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-537454 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-537454 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [01393108-d2c9-4312-9927-76463dc305b2] Pending
helpers_test.go:344: "task-pv-pod" [01393108-d2c9-4312-9927-76463dc305b2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [01393108-d2c9-4312-9927-76463dc305b2] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003086645s
addons_test.go:528: (dbg) Run:  kubectl --context addons-537454 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-537454 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-537454 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-537454 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-537454 delete pod task-pv-pod: (1.259147997s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-537454 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-537454 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-537454 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a6620266-7bc4-499b-8a39-f657293144eb] Pending
helpers_test.go:344: "task-pv-pod-restore" [a6620266-7bc4-499b-8a39-f657293144eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a6620266-7bc4-499b-8a39-f657293144eb] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003293999s
addons_test.go:570: (dbg) Run:  kubectl --context addons-537454 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-537454 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-537454 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.426349211s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.66s)
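
The long run of jsonpath queries above is a simple poll on the PVC's .status.phase until it reports Bound. An equivalent shell-out sketch in Go, with the profile, namespace, and PVC name taken from the log (the interval and retry bound are arbitrary choices):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pvcPhase shells out to kubectl the same way the repeated
// `get pvc ... -o jsonpath={.status.phase}` calls above do.
func pvcPhase(ctxName, ns, pvc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctxName,
		"get", "pvc", pvc, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for i := 0; i < 90; i++ { // ~3 minutes at 2s intervals
		phase, err := pvcPhase("addons-537454", "default", "hpvc")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc to bind")
}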

                                                
                                    
TestAddons/parallel/Headlamp (19.3s)
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-537454 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-krrkr" [362337ab-a53a-46bb-b896-e1cc83455620] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-krrkr" [362337ab-a53a-46bb-b896-e1cc83455620] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.003832994s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable headlamp --alsologtostderr -v=1: (5.560150742s)
--- PASS: TestAddons/parallel/Headlamp (19.30s)

TestAddons/parallel/CloudSpanner (5.42s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-f8qfp" [694455f3-21ba-4b9d-b7b7-e5dc59a3d2ee] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003506857s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-537454
--- PASS: TestAddons/parallel/CloudSpanner (5.42s)

TestAddons/parallel/LocalPath (55.06s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-537454 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-537454 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [80be9d50-fae6-4674-8b1c-042fe012c263] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [80be9d50-fae6-4674-8b1c-042fe012c263] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [80be9d50-fae6-4674-8b1c-042fe012c263] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003512918s
addons_test.go:938: (dbg) Run:  kubectl --context addons-537454 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 ssh "cat /opt/local-path-provisioner/pvc-b1079101-08ea-46c0-97ce-99eeccde2570_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-537454 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-537454 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.191023988s)
--- PASS: TestAddons/parallel/LocalPath (55.06s)
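A note on the ssh step above: the cat works because the local-path provisioner stores each volume as a directory named from the PV name (pvc-<uid>), the namespace, and the claim name, joined by underscores, under its data root. A tiny sketch reconstructing that path (the layout is inferred from the single path in this log, so treat the format as an assumption):

package main

import "fmt"

// localPathDir rebuilds the on-node directory for a local-path volume.
// Layout inferred from the log: <data-root>/<pv-name>_<namespace>_<claim>.
func localPathDir(pvName, namespace, claim string) string {
	return fmt.Sprintf("/opt/local-path-provisioner/%s_%s_%s", pvName, namespace, claim)
}

func main() {
	fmt.Println(localPathDir("pvc-b1079101-08ea-46c0-97ce-99eeccde2570", "default", "test-pvc"))
}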

TestAddons/parallel/NvidiaDevicePlugin (5.6s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-27tsr" [b325bd7a-05f1-473f-bbfb-9f57ff7e8bfd] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004267095s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-537454
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.60s)

TestAddons/parallel/Yakd (10.55s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-qgczf" [2c6440ca-4ebf-49b9-912e-4716751828e3] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003813538s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-537454 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-537454 addons disable yakd --alsologtostderr -v=1: (5.542270502s)
--- PASS: TestAddons/parallel/Yakd (10.55s)

TestAddons/StoppedEnableDisable (11.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-537454
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-537454: (10.860233219s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-537454
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-537454
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-537454
--- PASS: TestAddons/StoppedEnableDisable (11.09s)

TestCertOptions (32.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-439049 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-439049 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (30.366933513s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-439049 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-439049 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-439049 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-439049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-439049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-439049: (1.986120885s)
--- PASS: TestCertOptions (32.92s)
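The openssl inspection above is effectively a SAN check: the extra --apiserver-ips and --apiserver-names flags must show up as subject alternative names in apiserver.crt, and the config view/admin.conf steps confirm the 8555 port. A hedged Go equivalent of the SAN assertion (it assumes the certificate was first copied out of the node to a local file named apiserver.crt):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// hasSANs parses a PEM-encoded certificate and reports whether it carries
// the expected IP and DNS subject alternative names.
func hasSANs(pemBytes []byte, wantIP net.IP, wantDNS string) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	ipOK, dnsOK := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			ipOK = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantDNS {
			dnsOK = true
		}
	}
	return ipOK && dnsOK, nil
}

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // assumed to be copied out of the node
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(hasSANs(pemBytes, net.ParseIP("192.168.15.15"), "www.google.com"))
}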

TestCertExpiration (234.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-491077 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-491077 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (32.1126964s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-491077 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-491077 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (20.188935842s)
helpers_test.go:175: Cleaning up "cert-expiration-491077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-491077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-491077: (2.467359131s)
--- PASS: TestCertExpiration (234.77s)

TestDockerFlags (32.79s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-897084 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-897084 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (30.088229068s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-897084 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-897084 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-897084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-897084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-897084: (2.1994006s)
--- PASS: TestDockerFlags (32.79s)
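The two systemctl probes above check both halves of the flag plumbing: --docker-env values must appear in the unit's Environment= property and --docker-opt values in its ExecStart line. A small sketch of the Environment half (the K=V-pairs-on-one-line format is standard systemd `show` output; the helper itself is illustrative):

package main

import (
	"fmt"
	"strings"
)

// envContains checks a `systemctl show --property=Environment` line for a
// KEY=VALUE pair, e.g. Environment=FOO=BAR BAZ=BAT.
func envContains(propertyLine, pair string) bool {
	line := strings.TrimPrefix(strings.TrimSpace(propertyLine), "Environment=")
	for _, kv := range strings.Fields(line) {
		if kv == pair {
			return true
		}
	}
	return false
}

func main() {
	out := "Environment=FOO=BAR BAZ=BAT"
	fmt.Println(envContains(out, "FOO=BAR"), envContains(out, "QUX=1")) // true false
}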

TestForceSystemdFlag (30.75s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-670900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-670900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.055007407s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-670900 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-670900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-670900
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-670900: (2.327678148s)
--- PASS: TestForceSystemdFlag (30.75s)
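The single functional assertion here is the `docker info --format {{.CgroupDriver}}` probe: with --force-systemd the daemon inside the node should report systemd rather than the cgroupfs default. A sketch of that probe (run against a local daemon for brevity; the test issues it through `minikube ssh`):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// cgroupDriver asks the Docker daemon which cgroup driver it is using.
func cgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := cgroupDriver()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cgroup driver:", driver) // expect "systemd" when forced
}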

TestForceSystemdEnv (36.67s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-937378 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-937378 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.403449083s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-937378 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-937378" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-937378
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-937378: (2.92488311s)
--- PASS: TestForceSystemdEnv (36.67s)

TestKVMDriverInstallOrUpdate (4.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I0924 00:23:19.453792   14219 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 00:23:19.453940   14219 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0924 00:23:19.484648   14219 install.go:62] docker-machine-driver-kvm2: exit status 1
W0924 00:23:19.484978   14219 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 00:23:19.485036   14219 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4193148192/001/docker-machine-driver-kvm2
I0924 00:23:19.741303   14219 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4193148192/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00074b450 gz:0xc00074b458 tar:0xc00074b400 tar.bz2:0xc00074b410 tar.gz:0xc00074b420 tar.xz:0xc00074b430 tar.zst:0xc00074b440 tbz2:0xc00074b410 tgz:0xc00074b420 txz:0xc00074b430 tzst:0xc00074b440 xz:0xc00074b460 zip:0xc00074b470 zst:0xc00074b468] Getters:map[file:0xc0018813e0 http:0xc0004d2d70 https:0xc0004d2dc0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 00:23:19.741367   14219 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4193148192/001/docker-machine-driver-kvm2
I0924 00:23:22.280133   14219 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0924 00:23:22.280210   14219 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0924 00:23:22.311680   14219 install.go:137] /home/jenkins/workspace/Docker_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0924 00:23:22.311715   14219 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0924 00:23:22.311773   14219 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0924 00:23:22.311803   14219 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4193148192/002/docker-machine-driver-kvm2
I0924 00:23:22.368208   14219 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate4193148192/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc00074b450 gz:0xc00074b458 tar:0xc00074b400 tar.bz2:0xc00074b410 tar.gz:0xc00074b420 tar.xz:0xc00074b430 tar.zst:0xc00074b440 tbz2:0xc00074b410 tgz:0xc00074b420 txz:0xc00074b430 tzst:0xc00074b440 xz:0xc00074b460 zip:0xc00074b470 zst:0xc00074b468] Getters:map[file:0xc0008e7780 http:0xc000a2e550 https:0xc000a2e5a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0924 00:23:22.368270   14219 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate4193148192/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (4.65s)
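The install/update logs above show the download strategy: try the architecture-suffixed asset (docker-machine-driver-kvm2-amd64) first, and when its checksum file 404s, retry the unsuffixed common name. A simplified sketch of that fallback; the real path goes through go-getter with checksum verification, which the plain net/http below does not replicate:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchDriver tries the arch-specific release asset first and falls back to
// the common name, mirroring the 404-then-retry sequence in the log.
func fetchDriver(version, arch, dst string) error {
	base := "https://github.com/kubernetes/minikube/releases/download/" + version
	for _, name := range []string{
		"docker-machine-driver-kvm2-" + arch, // arch-specific, may 404
		"docker-machine-driver-kvm2",         // common fallback
	} {
		resp, err := http.Get(base + "/" + name)
		if err != nil {
			return err
		}
		if resp.StatusCode == http.StatusOK {
			f, err := os.Create(dst)
			if err != nil {
				resp.Body.Close()
				return err
			}
			_, err = io.Copy(f, resp.Body)
			resp.Body.Close()
			f.Close()
			return err
		}
		resp.Body.Close()
	}
	return fmt.Errorf("no release asset found for %s", version)
}

func main() {
	fmt.Println(fetchDriver("v1.3.0", "amd64", "/tmp/docker-machine-driver-kvm2"))
}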

TestErrorSpam/setup (22.25s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-802327 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-802327 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-802327 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-802327 --driver=docker  --container-runtime=docker: (22.252992975s)
--- PASS: TestErrorSpam/setup (22.25s)

TestErrorSpam/start (0.55s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.11s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 pause
--- PASS: TestErrorSpam/pause (1.11s)

TestErrorSpam/unpause (1.33s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 unpause
--- PASS: TestErrorSpam/unpause (1.33s)

TestErrorSpam/stop (1.91s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 stop: (1.737885612s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-802327 --log_dir /tmp/nospam-802327 stop
--- PASS: TestErrorSpam/stop (1.91s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19696-7438/.minikube/files/etc/test/nested/copy/14219/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.17s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-978241 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (59.170205911s)
--- PASS: TestFunctional/serial/StartWithProxy (59.17s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.17s)

=== RUN   TestFunctional/serial/SoftStart
I0923 23:53:21.404471   14219 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-978241 --alsologtostderr -v=8: (26.165662505s)
functional_test.go:663: soft start took 26.166412329s for "functional-978241" cluster.
I0923 23:53:47.570499   14219 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (26.17s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-978241 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-978241 /tmp/TestFunctionalserialCacheCmdcacheadd_local4087744929/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache add minikube-local-cache-test:functional-978241
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-978241 cache add minikube-local-cache-test:functional-978241: (1.096687946s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache delete minikube-local-cache-test:functional-978241
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-978241
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (245.223032ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.17s)
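The sequence above is the cache-reload contract: remove the image inside the node, observe `crictl inspecti` fail with exit status 1 (the captured FATA output), then `cache reload` restores it from minikube's on-host cache so the final inspect succeeds. A condensed sketch of the same round-trip via os/exec (binary path and profile name taken from this run):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its error (nil means exit status 0).
func run(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

func main() {
	const profile = "functional-978241"
	const img = "registry.k8s.io/pause:latest"
	mk := "out/minikube-linux-amd64"

	// 1. Remove the image inside the node.
	_ = run(mk, "-p", profile, "ssh", "sudo docker rmi "+img)

	// 2. crictl no longer sees it: inspecti exits non-zero, as in the log.
	if run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("image unexpectedly still present")
	}

	// 3. Reload from minikube's on-host cache; the inspect succeeds again.
	_ = run(mk, "-p", profile, "cache", "reload")
	fmt.Println(run(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img))
}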

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 kubectl -- --context functional-978241 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-978241 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (36.8s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-978241 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.803258115s)
functional_test.go:761: restart took 36.80343702s for "functional-978241" cluster.
I0923 23:54:29.928977   14219 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (36.80s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-978241 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.94s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 logs
--- PASS: TestFunctional/serial/LogsCmd (0.94s)

TestFunctional/serial/LogsFileCmd (0.97s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 logs --file /tmp/TestFunctionalserialLogsFileCmd1825364102/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.97s)

TestFunctional/serial/InvalidService (3.96s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-978241 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-978241
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-978241: exit status 115 (302.149751ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30952 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-978241 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.96s)

TestFunctional/parallel/ConfigCmd (0.34s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 config get cpus: exit status 14 (84.04352ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 config get cpus: exit status 14 (43.533913ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.34s)
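Note the exit-code contract being exercised: `config get` on an unset key fails with status 14 and "Error: specified key could not be found in config", while set and unset succeed quietly. A sketch of asserting that status from Go:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-978241",
		"config", "get", "cpus")
	err := cmd.Run()

	// exec.ExitError exposes the process exit code.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("key unset, as expected (exit status 14)")
	} else {
		fmt.Println("unexpected result:", err)
	}
}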

TestFunctional/parallel/DashboardCmd (10.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-978241 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-978241 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 64576: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.29s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-978241 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (202.697882ms)

-- stdout --
	* [functional-978241] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 23:54:37.816203   63270 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:54:37.816365   63270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:54:37.816378   63270 out.go:358] Setting ErrFile to fd 2...
	I0923 23:54:37.816384   63270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:54:37.816767   63270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0923 23:54:37.817631   63270 out.go:352] Setting JSON to false
	I0923 23:54:37.819182   63270 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2222,"bootTime":1727133456,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:54:37.819268   63270 start.go:139] virtualization: kvm guest
	I0923 23:54:37.822307   63270 out.go:177] * [functional-978241] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 23:54:37.824278   63270 notify.go:220] Checking for updates...
	I0923 23:54:37.824290   63270 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:54:37.825801   63270 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:54:37.827051   63270 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:54:37.828527   63270 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	I0923 23:54:37.830043   63270 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:54:37.831442   63270 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:54:37.834762   63270 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:54:37.835477   63270 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:54:37.877507   63270 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 23:54:37.877621   63270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:54:37.935430   63270 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 23:54:37.924278951 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:54:37.935569   63270 docker.go:318] overlay module found
	I0923 23:54:37.938816   63270 out.go:177] * Using the docker driver based on existing profile
	I0923 23:54:37.940249   63270 start.go:297] selected driver: docker
	I0923 23:54:37.940261   63270 start.go:901] validating driver "docker" against &{Name:functional-978241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-978241 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:54:37.940346   63270 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:54:37.942583   63270 out.go:201] 
	W0923 23:54:37.944006   63270 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 23:54:37.945525   63270 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.47s)
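The dry run fails fast with RSRC_INSUFFICIENT_REQ_MEMORY because the requested 250MiB sits below minikube's usable minimum of 1800MB; validation happens before any cluster state is touched. A toy version of that guard (threshold and wording lifted from the error above; the real check lives in minikube's start path):

package main

import "fmt"

const minUsableMemoryMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message

// validateMemory rejects requests below the usable minimum, as the dry run does.
func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, as in the dry run
	fmt.Println(validateMemory(4000)) // ok
}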

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-978241 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-978241 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (192.88714ms)

-- stdout --
	* [functional-978241] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 23:54:37.632281   63136 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:54:37.632606   63136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:54:37.632616   63136 out.go:358] Setting ErrFile to fd 2...
	I0923 23:54:37.632621   63136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:54:37.633148   63136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0923 23:54:37.633672   63136 out.go:352] Setting JSON to false
	I0923 23:54:37.635312   63136 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":2222,"bootTime":1727133456,"procs":413,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 23:54:37.635400   63136 start.go:139] virtualization: kvm guest
	I0923 23:54:37.638510   63136 out.go:177] * [functional-978241] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 23:54:37.640319   63136 out.go:177]   - MINIKUBE_LOCATION=19696
	I0923 23:54:37.640347   63136 notify.go:220] Checking for updates...
	I0923 23:54:37.643326   63136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 23:54:37.644967   63136 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	I0923 23:54:37.646438   63136 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	I0923 23:54:37.648083   63136 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 23:54:37.649585   63136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 23:54:37.651515   63136 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:54:37.652156   63136 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 23:54:37.679699   63136 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 23:54:37.679799   63136 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:54:37.732878   63136 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 23:54:37.722868568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:54:37.733009   63136 docker.go:318] overlay module found
	I0923 23:54:37.734959   63136 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 23:54:37.736452   63136 start.go:297] selected driver: docker
	I0923 23:54:37.736467   63136 start.go:901] validating driver "docker" against &{Name:functional-978241 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-978241 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 23:54:37.736583   63136 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 23:54:37.739235   63136 out.go:201] 
	W0923 23:54:37.740758   63136 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 23:54:37.742288   63136 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
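For non-French readers: the localized lines above read "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". The test passes because minikube both localized its output and rejected the undersized allocation with exit code 23. A minimal sketch of reproducing the check by hand, assuming the locale is picked up from the standard LC_ALL environment variable (the log does not show how the harness sets the locale):

    # hedged reproduction sketch; profile name and flags are taken from this run
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-978241 \
      --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=docker
    echo $?   # expected: 23 (RSRC_INSUFFICIENT_REQ_MEMORY)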
TestFunctional/parallel/StatusCmd (1.05s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.05s)
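The second invocation above exercises the custom format path: `status -f` takes a Go template rendered against the status struct, so callers can pull out individual fields (the `kublet:` label there is literal text in the test's format string, not a field name; the field itself is `.Kubelet`). A hedged sketch of scripting against it:

    # print a single field, then the full JSON for machine consumption
    out/minikube-linux-amd64 -p functional-978241 status -f '{{.APIServer}}'
    out/minikube-linux-amd64 -p functional-978241 status -o json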
TestFunctional/parallel/ServiceCmdConnect (6.91s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-978241 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-978241 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-97w5k" [a1099ef4-8ffe-4c7e-985a-9d7dae5b5319] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-97w5k" [a1099ef4-8ffe-4c7e-985a-9d7dae5b5319] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.00396251s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30174
functional_test.go:1675: http://192.168.49.2:30174: success! body:
Hostname: hello-node-connect-67bdd5bbb4-97w5k

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30174
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.91s)
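The flow above is: create an echoserver deployment, expose it as a NodePort service, resolve the node URL with `service --url`, and assert on the echoed body (echoserver reports the serving pod's hostname, which is why the body starts with the pod name). A hedged sketch of the same round trip by hand, with curl standing in for the test's Go HTTP client:

    kubectl --context functional-978241 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-978241 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-978241 service hello-node-connect --url)
    curl -s "$URL" | grep '^Hostname:'   # should print the serving pod's name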
TestFunctional/parallel/AddonsCmd (0.18s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)
TestFunctional/parallel/PersistentVolumeClaim (36.81s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [991ff14b-11a3-4dea-9d94-efb322fec01a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003496502s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-978241 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-978241 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-978241 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-978241 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3f3198e0-2c1e-400f-87d9-5b03cb7612cf] Pending
helpers_test.go:344: "sp-pod" [3f3198e0-2c1e-400f-87d9-5b03cb7612cf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3f3198e0-2c1e-400f-87d9-5b03cb7612cf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.003925958s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-978241 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-978241 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-978241 delete -f testdata/storage-provisioner/pod.yaml: (1.128186483s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-978241 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0000ccac-b1b0-4df2-b162-4735c7d3a119] Pending
helpers_test.go:344: "sp-pod" [0000ccac-b1b0-4df2-b162-4735c7d3a119] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0000ccac-b1b0-4df2-b162-4735c7d3a119] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004002255s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-978241 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.81s)
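The middle of this test is the actual persistence check: write a marker file into the PVC-backed mount, delete the pod, recreate it against the same claim, and confirm the file survived the pod's lifecycle. Condensed from the commands above:

    kubectl --context functional-978241 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-978241 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-978241 apply -f testdata/storage-provisioner/pod.yaml
    # once the new sp-pod is Running, the file written by the old pod should still be there
    kubectl --context functional-978241 exec sp-pod -- ls /tmp/mount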
TestFunctional/parallel/SSHCmd (0.65s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)
TestFunctional/parallel/CpCmd (1.68s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh -n functional-978241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cp functional-978241:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd935906570/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh -n functional-978241 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh -n functional-978241 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)
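`minikube cp` copies in both directions, host-to-node and node-to-host, and the test verifies each copy by cat-ing the destination over ssh. A hedged round-trip sketch (the /tmp/cp-test-copy.txt path is made up for illustration):

    out/minikube-linux-amd64 -p functional-978241 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-978241 cp functional-978241:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    diff testdata/cp-test.txt /tmp/cp-test-copy.txt && echo "round trip OK"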
TestFunctional/parallel/MySQL (25.35s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-978241 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-nzzd5" [cade5589-74f9-4539-8de7-06de77289fbc] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-nzzd5" [cade5589-74f9-4539-8de7-06de77289fbc] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003788387s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;": exit status 1 (113.422504ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0923 23:55:09.994644   14219 retry.go:31] will retry after 751.952387ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;": exit status 1 (118.016553ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0923 23:55:10.865391   14219 retry.go:31] will retry after 2.137481222s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;": exit status 1 (104.156108ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0923 23:55:13.107587   14219 retry.go:31] will retry after 1.792794198s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.35s)
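The Access denied and socket errors above are expected startup noise: the pod reaches Running before mysqld inside it finishes initializing, so the harness retries the query with increasing backoff until it succeeds. A hedged sketch of the same poll-until-ready loop (pod name from this run; the root password matches the test's mysql.yaml):

    until kubectl --context functional-978241 exec mysql-6cdb49bbb-nzzd5 -- \
        mysql -ppassword -e "show databases;" 2>/dev/null; do
      sleep 2   # crude fixed delay; the harness uses randomized increasing backoff
    done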
TestFunctional/parallel/FileSync (0.26s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/14219/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /etc/test/nested/copy/14219/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
TestFunctional/parallel/CertSync (1.8s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/14219.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /etc/ssl/certs/14219.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/14219.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /usr/share/ca-certificates/14219.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/142192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /etc/ssl/certs/142192.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/142192.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /usr/share/ca-certificates/142192.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)
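The `.0` names checked in the second half are OpenSSL subject-hash lookups: a CA installed as /usr/share/ca-certificates/14219.pem should also be reachable at /etc/ssl/certs/&lt;subject_hash&gt;.0, which is how TLS libraries find certificates. A hedged sketch of confirming the mapping, assuming a local copy of the synced PEM:

    # 51391683 in the log should be the subject hash of the synced test cert
    openssl x509 -noout -subject_hash -in 14219.pem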
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-978241 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh "sudo systemctl is-active crio": exit status 1 (373.946949ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)
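The non-zero exit is the point of this test: with Docker as the container runtime, crio must not be running, and `systemctl is-active` reports that by printing "inactive" and exiting non-zero (the log shows the remote command exited 3), which the ssh layer then propagates. A hedged sketch of the same probe:

    out/minikube-linux-amd64 -p functional-978241 ssh "sudo systemctl is-active crio"
    echo $?   # non-zero: crio is inactive, i.e. only the selected runtime (docker) runs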
TestFunctional/parallel/License (0.6s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)
TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-978241 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-978241 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-hhtbd" [44209a29-f782-429e-bad4-733c3ae432b1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-hhtbd" [44209a29-f782-429e-bad4-733c3ae432b1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.004481225s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.18s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
TestFunctional/parallel/MountCmd/any-port (7.88s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdany-port785980921/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727135676476211112" to /tmp/TestFunctionalparallelMountCmdany-port785980921/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727135676476211112" to /tmp/TestFunctionalparallelMountCmdany-port785980921/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727135676476211112" to /tmp/TestFunctionalparallelMountCmdany-port785980921/001/test-1727135676476211112
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (306.170363ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 23:54:36.782727   14219 retry.go:31] will retry after 321.110312ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 23:54 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 23:54 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 23:54 test-1727135676476211112
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh cat /mount-9p/test-1727135676476211112
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-978241 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [60d58c34-c368-4828-b947-41efe169db7e] Pending
helpers_test.go:344: "busybox-mount" [60d58c34-c368-4828-b947-41efe169db7e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [60d58c34-c368-4828-b947-41efe169db7e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [60d58c34-c368-4828-b947-41efe169db7e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003683076s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-978241 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdany-port785980921/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
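The mount test runs `minikube mount` as a background daemon exporting a host temp directory into the guest over 9p, polls `findmnt` until the mount appears (the first probe above simply raced the daemon and was retried), then checks visibility in both directions with a busybox pod. A hedged manual sketch (the host directory name is made up):

    out/minikube-linux-amd64 mount -p functional-978241 /tmp/hostdir:/mount-9p &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-978241 ssh -- ls -la /mount-9p
    kill $MOUNT_PID   # tear the mount daemon back down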
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "364.375452ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "57.642841ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "413.261127ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "53.991995ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.6s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.60s)
TestFunctional/parallel/DockerEnv/bash (0.84s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-978241 docker-env) && out/minikube-linux-amd64 status -p functional-978241"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-978241 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.84s)
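`docker-env` prints shell export statements (DOCKER_HOST and friends) that point the host's docker CLI at the Docker daemon inside the minikube node; eval-ing them is what makes the `docker images` call above list the cluster's images rather than the host's. A hedged sketch:

    eval "$(out/minikube-linux-amd64 -p functional-978241 docker-env)"
    env | grep DOCKER_   # the exports that were just set
    docker images        # now served by the node's Docker daemon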
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-978241 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-978241
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-978241
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-978241 image ls --format short --alsologtostderr:
I0923 23:54:57.150828   70022 out.go:345] Setting OutFile to fd 1 ...
I0923 23:54:57.150948   70022 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.150959   70022 out.go:358] Setting ErrFile to fd 2...
I0923 23:54:57.150965   70022 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.151244   70022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
I0923 23:54:57.151994   70022 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.152143   70022 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.152581   70022 cli_runner.go:164] Run: docker container inspect functional-978241 --format={{.State.Status}}
I0923 23:54:57.170532   70022 ssh_runner.go:195] Run: systemctl --version
I0923 23:54:57.170593   70022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-978241
I0923 23:54:57.191118   70022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/functional-978241/id_rsa Username:docker}
I0923 23:54:57.283322   70022 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-978241 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-978241 | ea133e58276a6 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/kicbase/echo-server               | functional-978241 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-978241 image ls --format table --alsologtostderr:
I0923 23:54:58.812544   70420 out.go:345] Setting OutFile to fd 1 ...
I0923 23:54:58.812801   70420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:58.812811   70420 out.go:358] Setting ErrFile to fd 2...
I0923 23:54:58.812815   70420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:58.813038   70420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
I0923 23:54:58.813670   70420 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:58.813784   70420 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:58.814207   70420 cli_runner.go:164] Run: docker container inspect functional-978241 --format={{.State.Status}}
I0923 23:54:58.830938   70420 ssh_runner.go:195] Run: systemctl --version
I0923 23:54:58.830983   70420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-978241
I0923 23:54:58.851395   70420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/functional-978241/id_rsa Username:docker}
I0923 23:54:58.939206   70420 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-978241 image ls --format json --alsologtostderr:
[{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"ea133e58276a69e167ba90df3234275ba31c6902f7cf3f4e8c11842f631a27e8","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-978241"],"size":"30"}
,{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5e
edcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-978241"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-978241 image ls --format json --alsologtostderr:
I0923 23:54:58.608862   70369 out.go:345] Setting OutFile to fd 1 ...
I0923 23:54:58.609159   70369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:58.609172   70369 out.go:358] Setting ErrFile to fd 2...
I0923 23:54:58.609179   70369 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:58.609574   70369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
I0923 23:54:58.610678   70369 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:58.610837   70369 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:58.611393   70369 cli_runner.go:164] Run: docker container inspect functional-978241 --format={{.State.Status}}
I0923 23:54:58.628702   70369 ssh_runner.go:195] Run: systemctl --version
I0923 23:54:58.628747   70369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-978241
I0923 23:54:58.648180   70369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/functional-978241/id_rsa Username:docker}
I0923 23:54:58.734629   70369 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
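The JSON format is the machine-friendly variant of these listings; each entry carries an id, repoTags, and a size string. A hedged sketch of scripting against it, assuming jq is available on the host:

    out/minikube-linux-amd64 -p functional-978241 image ls --format json \
      | jq -r '.[] | "\(.repoTags[0])\t\(.size)"'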
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-978241 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: ea133e58276a69e167ba90df3234275ba31c6902f7cf3f4e8c11842f631a27e8
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-978241
size: "30"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-978241
size: "4940000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-978241 image ls --format yaml --alsologtostderr:
I0923 23:54:57.361285   70068 out.go:345] Setting OutFile to fd 1 ...
I0923 23:54:57.361413   70068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.361423   70068 out.go:358] Setting ErrFile to fd 2...
I0923 23:54:57.361428   70068 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.361695   70068 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
I0923 23:54:57.362518   70068 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.362686   70068 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.363259   70068 cli_runner.go:164] Run: docker container inspect functional-978241 --format={{.State.Status}}
I0923 23:54:57.384130   70068 ssh_runner.go:195] Run: systemctl --version
I0923 23:54:57.384190   70068 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-978241
I0923 23:54:57.403313   70068 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/functional-978241/id_rsa Username:docker}
I0923 23:54:57.486298   70068 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.85s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh pgrep buildkitd: exit status 1 (263.219188ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image build -t localhost/my-image:functional-978241 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-978241 image build -t localhost/my-image:functional-978241 testdata/build --alsologtostderr: (4.374238139s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-978241 image build -t localhost/my-image:functional-978241 testdata/build --alsologtostderr:
I0923 23:54:57.829191   70212 out.go:345] Setting OutFile to fd 1 ...
I0923 23:54:57.829535   70212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.829546   70212 out.go:358] Setting ErrFile to fd 2...
I0923 23:54:57.829554   70212 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 23:54:57.829806   70212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
I0923 23:54:57.830635   70212 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.831329   70212 config.go:182] Loaded profile config "functional-978241": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0923 23:54:57.831969   70212 cli_runner.go:164] Run: docker container inspect functional-978241 --format={{.State.Status}}
I0923 23:54:57.850546   70212 ssh_runner.go:195] Run: systemctl --version
I0923 23:54:57.850602   70212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-978241
I0923 23:54:57.870967   70212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/functional-978241/id_rsa Username:docker}
I0923 23:54:57.954256   70212 build_images.go:161] Building image from path: /tmp/build.1083284124.tar
I0923 23:54:57.954315   70212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 23:54:57.962442   70212 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1083284124.tar
I0923 23:54:57.966099   70212 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1083284124.tar: stat -c "%s %y" /var/lib/minikube/build/build.1083284124.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1083284124.tar': No such file or directory
I0923 23:54:57.966127   70212 ssh_runner.go:362] scp /tmp/build.1083284124.tar --> /var/lib/minikube/build/build.1083284124.tar (3072 bytes)
I0923 23:54:57.991407   70212 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1083284124
I0923 23:54:58.000870   70212 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1083284124 -xf /var/lib/minikube/build/build.1083284124.tar
I0923 23:54:58.009452   70212 docker.go:360] Building image: /var/lib/minikube/build/build.1083284124
I0923 23:54:58.009516   70212 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-978241 /var/lib/minikube/build/build.1083284124
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 1.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:40ff8d56a65f1b2a109aad14c5c77f4e3dd0ee332f8071d83874df671a6f5cf5 done
#8 naming to localhost/my-image:functional-978241 done
#8 DONE 0.0s
I0923 23:55:02.128573   70212 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-978241 /var/lib/minikube/build/build.1083284124: (4.119028925s)
I0923 23:55:02.128659   70212 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1083284124
I0923 23:55:02.138738   70212 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1083284124.tar
I0923 23:55:02.147996   70212 build_images.go:217] Built localhost/my-image:functional-978241 from /tmp/build.1083284124.tar
I0923 23:55:02.148030   70212 build_images.go:133] succeeded building to: functional-978241
I0923 23:55:02.148035   70212 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.85s)

TestFunctional/parallel/ImageCommands/Setup (1.92s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.899405318s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-978241
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image load --daemon kicbase/echo-server:functional-978241 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

TestFunctional/parallel/MountCmd/specific-port (1.7s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdspecific-port56387550/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.86312ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 23:54:44.640133   14219 retry.go:31] will retry after 371.188952ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdspecific-port56387550/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh "sudo umount -f /mount-9p": exit status 1 (274.51372ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-978241 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdspecific-port56387550/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image load --daemon kicbase/echo-server:functional-978241 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.88s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-978241
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image load --daemon kicbase/echo-server:functional-978241 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

TestFunctional/parallel/ServiceCmd/List (0.61s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.61s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T" /mount1: exit status 1 (362.242525ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0923 23:54:46.422856   14219 retry.go:31] will retry after 478.430997ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-978241 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-978241 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2732908796/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service list -o json
functional_test.go:1494: Took "561.835625ms" to run "out/minikube-linux-amd64 -p functional-978241 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.56s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32411
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image save kicbase/echo-server:functional-978241 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image rm kicbase/echo-server:functional-978241 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32411
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.33s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
2024/09/23 23:54:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 68119: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-978241
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-978241 image save --daemon kicbase/echo-server:functional-978241 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-978241
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-978241 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ce98a9b8-24e7-4fab-9651-1fea370c132f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ce98a9b8-24e7-4fab-9651-1fea370c132f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003451632s
I0923 23:54:58.382152   14219 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-978241 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.18.166 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-978241 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-978241
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-978241
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-978241
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (99.91s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-494395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0923 23:56:41.325140   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.331634   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.343363   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.365630   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.407074   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.488545   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.650563   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:41.972584   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:42.614470   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:43.896794   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:46.459007   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:56:51.580238   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:57:01.822195   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-494395 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m39.261956927s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (99.91s)

TestMultiControlPlane/serial/DeployApp (5.26s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-494395 -- rollout status deployment/busybox: (3.384841553s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-2hcds -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-77tq4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-c54f4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-2hcds -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-77tq4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-c54f4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-2hcds -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-77tq4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-c54f4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.26s)

TestMultiControlPlane/serial/PingHostFromPods (1.03s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-2hcds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-2hcds -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-77tq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-77tq4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-c54f4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-494395 -- exec busybox-7dff88458-c54f4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)

TestMultiControlPlane/serial/AddWorkerNode (19.87s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-494395 -v=7 --alsologtostderr
E0923 23:57:22.303797   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-494395 -v=7 --alsologtostderr: (19.079586551s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (19.87s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-494395 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.81s)

TestMultiControlPlane/serial/CopyFile (14.84s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp testdata/cp-test.txt ha-494395:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile300840644/001/cp-test_ha-494395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395:/home/docker/cp-test.txt ha-494395-m02:/home/docker/cp-test_ha-494395_ha-494395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test_ha-494395_ha-494395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395:/home/docker/cp-test.txt ha-494395-m03:/home/docker/cp-test_ha-494395_ha-494395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test_ha-494395_ha-494395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395:/home/docker/cp-test.txt ha-494395-m04:/home/docker/cp-test_ha-494395_ha-494395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test_ha-494395_ha-494395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp testdata/cp-test.txt ha-494395-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile300840644/001/cp-test_ha-494395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m02:/home/docker/cp-test.txt ha-494395:/home/docker/cp-test_ha-494395-m02_ha-494395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test_ha-494395-m02_ha-494395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m02:/home/docker/cp-test.txt ha-494395-m03:/home/docker/cp-test_ha-494395-m02_ha-494395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test_ha-494395-m02_ha-494395-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m02:/home/docker/cp-test.txt ha-494395-m04:/home/docker/cp-test_ha-494395-m02_ha-494395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test_ha-494395-m02_ha-494395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp testdata/cp-test.txt ha-494395-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile300840644/001/cp-test_ha-494395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m03:/home/docker/cp-test.txt ha-494395:/home/docker/cp-test_ha-494395-m03_ha-494395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test_ha-494395-m03_ha-494395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m03:/home/docker/cp-test.txt ha-494395-m02:/home/docker/cp-test_ha-494395-m03_ha-494395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test_ha-494395-m03_ha-494395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m03:/home/docker/cp-test.txt ha-494395-m04:/home/docker/cp-test_ha-494395-m03_ha-494395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test_ha-494395-m03_ha-494395-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp testdata/cp-test.txt ha-494395-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile300840644/001/cp-test_ha-494395-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m04:/home/docker/cp-test.txt ha-494395:/home/docker/cp-test_ha-494395-m04_ha-494395.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395 "sudo cat /home/docker/cp-test_ha-494395-m04_ha-494395.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m04:/home/docker/cp-test.txt ha-494395-m02:/home/docker/cp-test_ha-494395-m04_ha-494395-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m02 "sudo cat /home/docker/cp-test_ha-494395-m04_ha-494395-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 cp ha-494395-m04:/home/docker/cp-test.txt ha-494395-m03:/home/docker/cp-test_ha-494395-m04_ha-494395-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 ssh -n ha-494395-m03 "sudo cat /home/docker/cp-test_ha-494395-m04_ha-494395-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.84s)

TestMultiControlPlane/serial/StopSecondaryNode (11.28s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-494395 node stop m02 -v=7 --alsologtostderr: (10.636197867s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr: exit status 7 (646.035151ms)

-- stdout --
	ha-494395
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-494395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-494395-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-494395-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0923 23:58:00.056101   98308 out.go:345] Setting OutFile to fd 1 ...
	I0923 23:58:00.056451   98308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:00.056463   98308 out.go:358] Setting ErrFile to fd 2...
	I0923 23:58:00.056469   98308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 23:58:00.056722   98308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0923 23:58:00.056949   98308 out.go:352] Setting JSON to false
	I0923 23:58:00.056983   98308 mustload.go:65] Loading cluster: ha-494395
	I0923 23:58:00.057020   98308 notify.go:220] Checking for updates...
	I0923 23:58:00.057414   98308 config.go:182] Loaded profile config "ha-494395": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0923 23:58:00.057433   98308 status.go:174] checking status of ha-494395 ...
	I0923 23:58:00.057979   98308 cli_runner.go:164] Run: docker container inspect ha-494395 --format={{.State.Status}}
	I0923 23:58:00.076533   98308 status.go:364] ha-494395 host status = "Running" (err=<nil>)
	I0923 23:58:00.076570   98308 host.go:66] Checking if "ha-494395" exists ...
	I0923 23:58:00.076819   98308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-494395
	I0923 23:58:00.094545   98308 host.go:66] Checking if "ha-494395" exists ...
	I0923 23:58:00.094787   98308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 23:58:00.094820   98308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-494395
	I0923 23:58:00.111772   98308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/ha-494395/id_rsa Username:docker}
	I0923 23:58:00.198927   98308 ssh_runner.go:195] Run: systemctl --version
	I0923 23:58:00.202779   98308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:58:00.213025   98308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 23:58:00.264947   98308 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-23 23:58:00.255245591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 23:58:00.265473   98308 kubeconfig.go:125] found "ha-494395" server: "https://192.168.49.254:8443"
	I0923 23:58:00.265505   98308 api_server.go:166] Checking apiserver status ...
	I0923 23:58:00.265552   98308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:58:00.276740   98308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2421/cgroup
	I0923 23:58:00.286002   98308 api_server.go:182] apiserver freezer: "7:freezer:/docker/761b3d70f4e24aaae52eb4c9d6b83fd1b52a9ce8138293b122fcd40cfd9942e8/kubepods/burstable/podbc6d3c26c1d9730e34dbc18a1f4ed40f/d9f1c240f8090ad6919ea1ae5b3d642847167f49b37b68e7bb7f625ca44255df"
	I0923 23:58:00.286126   98308 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/761b3d70f4e24aaae52eb4c9d6b83fd1b52a9ce8138293b122fcd40cfd9942e8/kubepods/burstable/podbc6d3c26c1d9730e34dbc18a1f4ed40f/d9f1c240f8090ad6919ea1ae5b3d642847167f49b37b68e7bb7f625ca44255df/freezer.state
	I0923 23:58:00.294307   98308 api_server.go:204] freezer state: "THAWED"
	I0923 23:58:00.294335   98308 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 23:58:00.298098   98308 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 23:58:00.298123   98308 status.go:456] ha-494395 apiserver status = Running (err=<nil>)
	I0923 23:58:00.298135   98308 status.go:176] ha-494395 status: &{Name:ha-494395 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 23:58:00.298156   98308 status.go:174] checking status of ha-494395-m02 ...
	I0923 23:58:00.298411   98308 cli_runner.go:164] Run: docker container inspect ha-494395-m02 --format={{.State.Status}}
	I0923 23:58:00.317136   98308 status.go:364] ha-494395-m02 host status = "Stopped" (err=<nil>)
	I0923 23:58:00.317162   98308 status.go:377] host is not running, skipping remaining checks
	I0923 23:58:00.317168   98308 status.go:176] ha-494395-m02 status: &{Name:ha-494395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 23:58:00.317194   98308 status.go:174] checking status of ha-494395-m03 ...
	I0923 23:58:00.317479   98308 cli_runner.go:164] Run: docker container inspect ha-494395-m03 --format={{.State.Status}}
	I0923 23:58:00.335191   98308 status.go:364] ha-494395-m03 host status = "Running" (err=<nil>)
	I0923 23:58:00.335215   98308 host.go:66] Checking if "ha-494395-m03" exists ...
	I0923 23:58:00.335537   98308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-494395-m03
	I0923 23:58:00.355248   98308 host.go:66] Checking if "ha-494395-m03" exists ...
	I0923 23:58:00.355493   98308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 23:58:00.355525   98308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-494395-m03
	I0923 23:58:00.373598   98308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/ha-494395-m03/id_rsa Username:docker}
	I0923 23:58:00.458918   98308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:58:00.470698   98308 kubeconfig.go:125] found "ha-494395" server: "https://192.168.49.254:8443"
	I0923 23:58:00.470723   98308 api_server.go:166] Checking apiserver status ...
	I0923 23:58:00.470752   98308 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 23:58:00.481336   98308 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2244/cgroup
	I0923 23:58:00.491835   98308 api_server.go:182] apiserver freezer: "7:freezer:/docker/5494ddda3471b9f4aeaa922ed8fbb11ba593da93e0799179df7bb455679ea81d/kubepods/burstable/pod1416b8321e5757d1dd1c1356d00c80f3/9525b4f7c9499ee21dd09481fd665d9c2732da9d2a9c35d648af51c3e8b3ebfa"
	I0923 23:58:00.491900   98308 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/5494ddda3471b9f4aeaa922ed8fbb11ba593da93e0799179df7bb455679ea81d/kubepods/burstable/pod1416b8321e5757d1dd1c1356d00c80f3/9525b4f7c9499ee21dd09481fd665d9c2732da9d2a9c35d648af51c3e8b3ebfa/freezer.state
	I0923 23:58:00.500526   98308 api_server.go:204] freezer state: "THAWED"
	I0923 23:58:00.500563   98308 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 23:58:00.504408   98308 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 23:58:00.504437   98308 status.go:456] ha-494395-m03 apiserver status = Running (err=<nil>)
	I0923 23:58:00.504445   98308 status.go:176] ha-494395-m03 status: &{Name:ha-494395-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 23:58:00.504474   98308 status.go:174] checking status of ha-494395-m04 ...
	I0923 23:58:00.504724   98308 cli_runner.go:164] Run: docker container inspect ha-494395-m04 --format={{.State.Status}}
	I0923 23:58:00.522690   98308 status.go:364] ha-494395-m04 host status = "Running" (err=<nil>)
	I0923 23:58:00.522711   98308 host.go:66] Checking if "ha-494395-m04" exists ...
	I0923 23:58:00.522955   98308 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-494395-m04
	I0923 23:58:00.540755   98308 host.go:66] Checking if "ha-494395-m04" exists ...
	I0923 23:58:00.541077   98308 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 23:58:00.541122   98308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-494395-m04
	I0923 23:58:00.559404   98308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/ha-494395-m04/id_rsa Username:docker}
	I0923 23:58:00.643131   98308 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 23:58:00.653961   98308 status.go:176] ha-494395-m04 status: &{Name:ha-494395-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.28s)
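Note: the m03 check above shows the full probe sequence the status command uses for a control-plane node: pgrep the kube-apiserver, read its freezer cgroup, require state THAWED (a FROZEN cgroup means the container is paused, not stopped), then GET /healthz. Below is a minimal local sketch of that sequence, assuming shell access to the node and skipping TLS verification; the helper names are illustrative, not minikube's own.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// run executes a shell command (over SSH on the node, in minikube's case;
// locally here for illustration) and returns trimmed stdout.
func run(cmd string) (string, error) {
	out, err := exec.Command("sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	// 1. Find the newest kube-apiserver process.
	pid, err := run("sudo pgrep -xnf kube-apiserver.*minikube.*")
	if err != nil {
		fmt.Println("apiserver status = Stopped")
		return
	}
	// 2. Resolve its freezer cgroup and require THAWED: FROZEN would mean
	//    the container is paused rather than stopped.
	if cg, err := run("sudo egrep ^[0-9]+:freezer: /proc/" + pid + "/cgroup"); err == nil {
		if parts := strings.SplitN(cg, ":", 3); len(parts) == 3 {
			state, _ := run("sudo cat /sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if state != "THAWED" {
				fmt.Println("apiserver status = Paused")
				return
			}
		}
	}
	// 3. Only then hit /healthz; HTTP 200 means Running.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil || resp.StatusCode != http.StatusOK {
		fmt.Println("apiserver status = Stopped")
		return
	}
	fmt.Println("apiserver status = Running")
}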

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.65s)
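The degraded-state checks in this suite (ha_test.go:390) all read `profile list --output json` and inspect each profile's status, which for HA clusters surfaces as the "HAppy"/"Degraded" strings echoed in the test names. A sketch of consuming that JSON follows; the key names ("valid", "Name", "Status") are assumptions about the schema, not taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profiles models the assumed shape of `minikube profile list -o json`:
// a "valid" list whose entries carry a Name and a Status string.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		panic(err)
	}
	for _, prof := range p.Valid {
		// With one control plane stopped, the expectation here is a
		// degraded (but still listed) ha-494395 profile.
		fmt.Printf("%s: %s\n", prof.Name, prof.Status)
	}
}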

TestMultiControlPlane/serial/RestartSecondaryNode (19.42s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 node start m02 -v=7 --alsologtostderr
E0923 23:58:03.266175   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-494395 node start m02 -v=7 --alsologtostderr: (18.087800645s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr: (1.241095787s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.42s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.91s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (213.48s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-494395 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-494395 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-494395 -v=7 --alsologtostderr: (33.70964902s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-494395 --wait=true -v=7 --alsologtostderr
E0923 23:59:25.189312   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.046461   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.052900   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.064276   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.085701   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.127259   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.208721   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.370208   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:36.691431   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:37.333513   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:38.615242   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:41.176717   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:46.298515   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0923 23:59:56.539956   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:00:17.021840   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:00:57.983666   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:01:41.324910   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-494395 --wait=true -v=7 --alsologtostderr: (2m59.673508697s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-494395
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (213.48s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.27s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-494395 node delete m03 -v=7 --alsologtostderr: (8.539500856s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.27s)
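The go-template in ha_test.go:519 above simply prints each node's Ready condition status, and the test expects "True" for every node that survives the delete. The same check without templating, decoding `kubectl get nodes -o json` into only the fields it needs (a standalone sketch, not the harness's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeList mirrors just the fields the readiness check touches.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" {
				// The test expects "True" here for every remaining node.
				fmt.Printf("%s Ready=%s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}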

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.64s)

TestMultiControlPlane/serial/StopCluster (32.51s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 stop -v=7 --alsologtostderr
E0924 00:02:09.032793   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:02:19.906192   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-494395 stop -v=7 --alsologtostderr: (32.40672983s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr: exit status 7 (98.083015ms)

-- stdout --
	ha-494395
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-494395-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-494395-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 00:02:37.478407  128252 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:02:37.478504  128252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:02:37.478512  128252 out.go:358] Setting ErrFile to fd 2...
	I0924 00:02:37.478517  128252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:02:37.478684  128252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0924 00:02:37.478841  128252 out.go:352] Setting JSON to false
	I0924 00:02:37.478872  128252 mustload.go:65] Loading cluster: ha-494395
	I0924 00:02:37.478987  128252 notify.go:220] Checking for updates...
	I0924 00:02:37.479290  128252 config.go:182] Loaded profile config "ha-494395": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 00:02:37.479309  128252 status.go:174] checking status of ha-494395 ...
	I0924 00:02:37.479754  128252 cli_runner.go:164] Run: docker container inspect ha-494395 --format={{.State.Status}}
	I0924 00:02:37.498994  128252 status.go:364] ha-494395 host status = "Stopped" (err=<nil>)
	I0924 00:02:37.499037  128252 status.go:377] host is not running, skipping remaining checks
	I0924 00:02:37.499052  128252 status.go:176] ha-494395 status: &{Name:ha-494395 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:02:37.499100  128252 status.go:174] checking status of ha-494395-m02 ...
	I0924 00:02:37.499469  128252 cli_runner.go:164] Run: docker container inspect ha-494395-m02 --format={{.State.Status}}
	I0924 00:02:37.516311  128252 status.go:364] ha-494395-m02 host status = "Stopped" (err=<nil>)
	I0924 00:02:37.516333  128252 status.go:377] host is not running, skipping remaining checks
	I0924 00:02:37.516341  128252 status.go:176] ha-494395-m02 status: &{Name:ha-494395-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:02:37.516372  128252 status.go:174] checking status of ha-494395-m04 ...
	I0924 00:02:37.516601  128252 cli_runner.go:164] Run: docker container inspect ha-494395-m04 --format={{.State.Status}}
	I0924 00:02:37.534148  128252 status.go:364] ha-494395-m04 host status = "Stopped" (err=<nil>)
	I0924 00:02:37.534167  128252 status.go:377] host is not running, skipping remaining checks
	I0924 00:02:37.534172  128252 status.go:176] ha-494395-m04 status: &{Name:ha-494395-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.51s)
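Two details worth keeping from the output above: `minikube status` deliberately exits non-zero when hosts are down (exit status 7 here, with every node stopped), and the stderr trace shows the per-node Status struct behind the report. A sketch of branching on that exit code from a caller; treating 7 specifically as "stopped" is an inference from this run, not a documented contract.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-494395", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// minikube encodes cluster state in the exit code; 7 is what the
		// log above shows when hosts are stopped. Treat any non-zero code
		// as "not fully running" rather than a hard failure.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}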

TestMultiControlPlane/serial/RestartCluster (81.05s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-494395 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-494395 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.306126697s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.62s)

TestMultiControlPlane/serial/AddSecondaryNode (32.58s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-494395 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-494395 --control-plane -v=7 --alsologtostderr: (31.771851853s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-494395 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (32.58s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (21.57s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-577861 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-577861 --driver=docker  --container-runtime=docker: (21.570823459s)
--- PASS: TestImageBuild/serial/Setup (21.57s)

TestImageBuild/serial/NormalBuild (2.76s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-577861
E0924 00:05:03.749205   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-577861: (2.757031478s)
--- PASS: TestImageBuild/serial/NormalBuild (2.76s)

TestImageBuild/serial/BuildWithBuildArg (0.91s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-577861
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.91s)

TestImageBuild/serial/BuildWithDockerIgnore (0.94s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-577861
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.94s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-577861
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)

TestJSONOutput/start/Command (63.31s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-127349 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-127349 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m3.306441259s)
--- PASS: TestJSONOutput/start/Command (63.31s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-127349 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.43s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-127349 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.43s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-127349 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-127349 --output=json --user=testUser: (10.777447867s)
--- PASS: TestJSONOutput/stop/Command (10.78s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-403172 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-403172 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (63.872854ms)

-- stdout --
	{"specversion":"1.0","id":"02c16965-598f-4f3a-a6bb-52e31f699022","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-403172] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f54fae10-3900-4a62-b8f3-9078e807cf61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"cfb9c6ab-f218-44d0-90c9-bbbcf7c76832","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e3f802bc-f5b7-4e0e-b2c5-b8752ebac7e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig"}}
	{"specversion":"1.0","id":"e85c6082-91ea-4c39-bafe-92a695aae6e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube"}}
	{"specversion":"1.0","id":"8b41e7d0-040e-4902-8a4e-0e310f7783fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bdeca33d-df4f-42f5-9f65-a76ed4a1f06f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e06c60f0-61f0-4c90-983c-55a7f543b979","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-403172" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-403172
--- PASS: TestErrorJSONOutput (0.20s)
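Every line emitted under --output=json is a CloudEvents 1.0 envelope (specversion, id, source, type, data) with the human-facing payload nested under data; the failure above ends with an io.k8s.sigs.minikube.error event carrying name, exitcode, and advice. A minimal decoder for such a stream, reading line-delimited JSON on stdin and covering only the event types visible above:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents fields seen in the log above; all data
// values arrive as strings.
type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step / .info / .error
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // not an event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Piping `minikube start --output=json` through it yields one line per step or error.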

TestKicCustomNetwork/create_custom_network (25.92s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-974056 --network=
E0924 00:06:41.325522   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-974056 --network=: (23.939175913s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-974056" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-974056
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-974056: (1.966354329s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.92s)

TestKicCustomNetwork/use_default_bridge_network (25.73s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-223369 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-223369 --network=bridge: (23.827221966s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-223369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-223369
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-223369: (1.888320546s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.73s)

TestKicExistingNetwork (22.23s)
=== RUN   TestKicExistingNetwork
I0924 00:07:17.617883   14219 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0924 00:07:17.634266   14219 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0924 00:07:17.634342   14219 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0924 00:07:17.634357   14219 cli_runner.go:164] Run: docker network inspect existing-network
W0924 00:07:17.650734   14219 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0924 00:07:17.650766   14219 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0924 00:07:17.650781   14219 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0924 00:07:17.650929   14219 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0924 00:07:17.667786   14219 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f905db64c630 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:88:d5:72} reservation:<nil>}
I0924 00:07:17.668272   14219 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c2f130}
I0924 00:07:17.668302   14219 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0924 00:07:17.668384   14219 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0924 00:07:17.730485   14219 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-506889 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-506889 --network=existing-network: (20.19385253s)
helpers_test.go:175: Cleaning up "existing-network-506889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-506889
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-506889: (1.889307107s)
I0924 00:07:39.831348   14219 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.23s)
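TestKicExistingNetwork pre-creates the network the same way minikube itself would: inspect existing networks, skip any taken /24 (192.168.49.0/24 above), and create the first free one with the minikube ownership labels used for later cleanup. A trimmed sketch of that scan-and-create loop; the step size of 9 between candidate subnets (49 -> 58 -> 67 ...) is inferred from this run's logs, and the -o options from the logged create call are pared down here.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// taken reports whether any existing docker network already uses subnet.
func taken(subnet string) bool {
	out, _ := exec.Command("sh", "-c",
		`docker network inspect $(docker network ls -q) --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'`).Output()
	return strings.Contains(string(out), subnet)
}

func main() {
	// Candidate /24s stepped the way this run shows (49 -> 58 -> 67 ...).
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if taken(subnet) {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		// Same create call the log shows, including the minikube labels
		// used later to find and clean up the network.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		if err := cmd.Run(); err != nil {
			continue // lost a race for the subnet; try the next one
		}
		fmt.Println("created existing-network on", subnet)
		return
	}
}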

TestKicCustomSubnet (23.03s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-375612 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-375612 --subnet=192.168.60.0/24: (20.953086664s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-375612 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-375612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-375612
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-375612: (2.054324959s)
--- PASS: TestKicCustomSubnet (23.03s)

TestKicStaticIP (26.04s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-866438 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-866438 --static-ip=192.168.200.200: (23.969602001s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-866438 ip
helpers_test.go:175: Cleaning up "static-ip-866438" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-866438
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-866438: (1.952108678s)
--- PASS: TestKicStaticIP (26.04s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (47.7s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-892784 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-892784 --driver=docker  --container-runtime=docker: (20.771229593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-906251 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-906251 --driver=docker  --container-runtime=docker: (21.890978385s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-892784
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-906251
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-906251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-906251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-906251: (1.952810298s)
helpers_test.go:175: Cleaning up "first-892784" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-892784
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-892784: (1.98496804s)
--- PASS: TestMinikubeProfile (47.70s)

TestMountStart/serial/StartWithMountFirst (10.32s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-079993 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-079993 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.32441655s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.32s)

TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-079993 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (10.29s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-095310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
E0924 00:09:36.045762   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-095310 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.285316145s)
--- PASS: TestMountStart/serial/StartWithMountSecond (10.29s)

TestMountStart/serial/VerifyMountSecond (0.23s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095310 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

TestMountStart/serial/DeleteFirst (1.46s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-079993 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-079993 --alsologtostderr -v=5: (1.46200219s)
--- PASS: TestMountStart/serial/DeleteFirst (1.46s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095310 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-095310
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-095310: (1.1715877s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (8.7s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-095310
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-095310: (7.701959428s)
--- PASS: TestMountStart/serial/RestartStopped (8.70s)

TestMountStart/serial/VerifyMountPostStop (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-095310 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

TestMultiNode/serial/FreshStart2Nodes (72.54s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-908377 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-908377 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m12.108503394s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (72.54s)

TestMultiNode/serial/DeployApp2Nodes (39.81s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-908377 -- rollout status deployment/busybox: (3.396157765s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:07.383562   14219 retry.go:31] will retry after 809.095417ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:08.303383   14219 retry.go:31] will retry after 1.347646269s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:09.758683   14219 retry.go:31] will retry after 1.583774115s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:11.449156   14219 retry.go:31] will retry after 4.899528408s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:16.456482   14219 retry.go:31] will retry after 4.00225309s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:20.569880   14219 retry.go:31] will retry after 4.756872036s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:25.435732   14219 retry.go:31] will retry after 6.915624474s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0924 00:11:32.459952   14219 retry.go:31] will retry after 9.859735562s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0924 00:11:41.328059   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-22b6x -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-5wsbt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-22b6x -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-5wsbt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-22b6x -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-5wsbt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.81s)
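The retry.go lines above are the harness's poll-until-consistent pattern: re-run the pod-IP query with a roughly doubling, jittered delay until both busybox replicas report an IP or the time budget expires. A generic sketch of that pattern with a stand-in condition function:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling cond until it succeeds or the budget runs
// out, roughly doubling a jittered delay each attempt, as the
// "will retry after ..." lines above show.
func retryWithBackoff(budget time.Duration, cond func() error) error {
	deadline := time.Now().Add(budget)
	delay := 500 * time.Millisecond
	for {
		err := cond()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("condition never met: %w", err)
		}
		// Jitter so parallel tests don't poll in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryWithBackoff(30*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("expected 2 Pod IPs but got 1 (may be temporary)")
		}
		return nil
	})
	fmt.Println("both pod IPs reported after", attempts, "attempts")
}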

TestMultiNode/serial/PingHostFrom2Pods (0.71s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-22b6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-22b6x -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-5wsbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-908377 -- exec busybox-7dff88458-5wsbt -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
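The shell pipeline above leans on busybox's nslookup output format: the resolved address of host.minikube.internal sits on line 5, and `cut -d' ' -f3` takes the third raw space-separated field, yielding the host gateway IP (192.168.67.1 here) that the pod then pings. The same extraction in Go, reading nslookup output on stdin; the line/field offsets are busybox-specific assumptions.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Mirrors: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	// strings.Split (not Fields) matches cut, which counts empty fields too.
	sc := bufio.NewScanner(os.Stdin)
	line := 0
	for sc.Scan() {
		line++
		if line == 5 {
			fields := strings.Split(sc.Text(), " ")
			if len(fields) >= 3 {
				fmt.Println("host IP:", fields[2])
			}
			return
		}
	}
}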

TestMultiNode/serial/AddNode (18.65s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-908377 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-908377 -v 3 --alsologtostderr: (18.042622017s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.65s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-908377 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.59s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.59s)

TestMultiNode/serial/CopyFile (8.65s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp testdata/cp-test.txt multinode-908377:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile942930738/001/cp-test_multinode-908377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377:/home/docker/cp-test.txt multinode-908377-m02:/home/docker/cp-test_multinode-908377_multinode-908377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test_multinode-908377_multinode-908377-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377:/home/docker/cp-test.txt multinode-908377-m03:/home/docker/cp-test_multinode-908377_multinode-908377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test_multinode-908377_multinode-908377-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp testdata/cp-test.txt multinode-908377-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile942930738/001/cp-test_multinode-908377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m02:/home/docker/cp-test.txt multinode-908377:/home/docker/cp-test_multinode-908377-m02_multinode-908377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test_multinode-908377-m02_multinode-908377.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m02:/home/docker/cp-test.txt multinode-908377-m03:/home/docker/cp-test_multinode-908377-m02_multinode-908377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test_multinode-908377-m02_multinode-908377-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp testdata/cp-test.txt multinode-908377-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile942930738/001/cp-test_multinode-908377-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m03:/home/docker/cp-test.txt multinode-908377:/home/docker/cp-test_multinode-908377-m03_multinode-908377.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377 "sudo cat /home/docker/cp-test_multinode-908377-m03_multinode-908377.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 cp multinode-908377-m03:/home/docker/cp-test.txt multinode-908377-m02:/home/docker/cp-test_multinode-908377-m03_multinode-908377-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 ssh -n multinode-908377-m02 "sudo cat /home/docker/cp-test_multinode-908377-m03_multinode-908377-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.65s)

TestMultiNode/serial/StopNode (2.07s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-908377 node stop m03: (1.177541294s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-908377 status: exit status 7 (442.215756ms)

-- stdout --
	multinode-908377
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-908377-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-908377-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr: exit status 7 (452.165826ms)

-- stdout --
	multinode-908377
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-908377-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-908377-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 00:12:13.819280  214927 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:12:13.819539  214927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:12:13.819549  214927 out.go:358] Setting ErrFile to fd 2...
	I0924 00:12:13.819555  214927 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:12:13.819774  214927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0924 00:12:13.820009  214927 out.go:352] Setting JSON to false
	I0924 00:12:13.820051  214927 mustload.go:65] Loading cluster: multinode-908377
	I0924 00:12:13.820171  214927 notify.go:220] Checking for updates...
	I0924 00:12:13.820634  214927 config.go:182] Loaded profile config "multinode-908377": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 00:12:13.820665  214927 status.go:174] checking status of multinode-908377 ...
	I0924 00:12:13.821170  214927 cli_runner.go:164] Run: docker container inspect multinode-908377 --format={{.State.Status}}
	I0924 00:12:13.839115  214927 status.go:364] multinode-908377 host status = "Running" (err=<nil>)
	I0924 00:12:13.839156  214927 host.go:66] Checking if "multinode-908377" exists ...
	I0924 00:12:13.839498  214927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-908377
	I0924 00:12:13.857687  214927 host.go:66] Checking if "multinode-908377" exists ...
	I0924 00:12:13.857976  214927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:12:13.858049  214927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-908377
	I0924 00:12:13.875936  214927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/multinode-908377/id_rsa Username:docker}
	I0924 00:12:13.959346  214927 ssh_runner.go:195] Run: systemctl --version
	I0924 00:12:13.963468  214927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:12:13.974317  214927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:12:14.022138  214927 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-24 00:12:14.011658739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647931392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0924 00:12:14.022681  214927 kubeconfig.go:125] found "multinode-908377" server: "https://192.168.67.2:8443"
	I0924 00:12:14.022707  214927 api_server.go:166] Checking apiserver status ...
	I0924 00:12:14.022751  214927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:12:14.033595  214927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2351/cgroup
	I0924 00:12:14.043272  214927 api_server.go:182] apiserver freezer: "7:freezer:/docker/bd028cf80749fb46d659e30d010be0225a312e35f564b9847edb707af09159d8/kubepods/burstable/pod0c98f8cf693053138143831a5b97a9da/5896422976fa966d2b799e1ed2ae02d875c681bd5c79f364516e4939dd9ac97f"
	I0924 00:12:14.043347  214927 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bd028cf80749fb46d659e30d010be0225a312e35f564b9847edb707af09159d8/kubepods/burstable/pod0c98f8cf693053138143831a5b97a9da/5896422976fa966d2b799e1ed2ae02d875c681bd5c79f364516e4939dd9ac97f/freezer.state
	I0924 00:12:14.051672  214927 api_server.go:204] freezer state: "THAWED"
	I0924 00:12:14.051702  214927 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0924 00:12:14.056073  214927 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0924 00:12:14.056099  214927 status.go:456] multinode-908377 apiserver status = Running (err=<nil>)
	I0924 00:12:14.056109  214927 status.go:176] multinode-908377 status: &{Name:multinode-908377 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:12:14.056137  214927 status.go:174] checking status of multinode-908377-m02 ...
	I0924 00:12:14.056422  214927 cli_runner.go:164] Run: docker container inspect multinode-908377-m02 --format={{.State.Status}}
	I0924 00:12:14.074460  214927 status.go:364] multinode-908377-m02 host status = "Running" (err=<nil>)
	I0924 00:12:14.074485  214927 host.go:66] Checking if "multinode-908377-m02" exists ...
	I0924 00:12:14.074849  214927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-908377-m02
	I0924 00:12:14.093163  214927 host.go:66] Checking if "multinode-908377-m02" exists ...
	I0924 00:12:14.093430  214927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:12:14.093469  214927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-908377-m02
	I0924 00:12:14.111054  214927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19696-7438/.minikube/machines/multinode-908377-m02/id_rsa Username:docker}
	I0924 00:12:14.198827  214927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:12:14.209928  214927 status.go:176] multinode-908377-m02 status: &{Name:multinode-908377-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:12:14.209966  214927 status.go:174] checking status of multinode-908377-m03 ...
	I0924 00:12:14.210244  214927 cli_runner.go:164] Run: docker container inspect multinode-908377-m03 --format={{.State.Status}}
	I0924 00:12:14.227971  214927 status.go:364] multinode-908377-m03 host status = "Stopped" (err=<nil>)
	I0924 00:12:14.227992  214927 status.go:377] host is not running, skipping remaining checks
	I0924 00:12:14.227997  214927 status.go:176] multinode-908377-m03 status: &{Name:multinode-908377-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
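Both status invocations above exit 7 by design: `minikube status` signals "some node stopped" through its exit code, so callers have to treat 7 as data rather than a hard failure. A hedged Go sketch of that interpretation (exit code 7 is taken from the log above; everything else is illustrative):

    // Sketch: run `minikube status` and treat exit code 7 as "stopped",
    // matching the behavior recorded in this test.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "-p", "multinode-908377", "status").Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            fmt.Println("at least one node is stopped:")
            fmt.Print(string(out)) // Output still returns captured stdout on non-zero exit
        default:
            fmt.Println("status failed:", err)
        }
    }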

TestMultiNode/serial/StartAfterStop (9.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-908377 node start m03 -v=7 --alsologtostderr: (8.963166422s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.60s)

TestMultiNode/serial/RestartKeepsNodes (108.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-908377
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-908377
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-908377: (22.487210062s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-908377 --wait=true -v=8 --alsologtostderr
E0924 00:13:04.394536   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-908377 --wait=true -v=8 --alsologtostderr: (1m25.676014951s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-908377
--- PASS: TestMultiNode/serial/RestartKeepsNodes (108.25s)

TestMultiNode/serial/DeleteNode (5.14s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-908377 node delete m03: (4.600193647s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.14s)

TestMultiNode/serial/StopMultiNode (21.31s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 stop
E0924 00:14:36.045831   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-908377 stop: (21.14341218s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-908377 status: exit status 7 (86.764189ms)

-- stdout --
	multinode-908377
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-908377-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr: exit status 7 (84.26452ms)

-- stdout --
	multinode-908377
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-908377-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 00:14:38.494391  230252 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:14:38.494687  230252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:14:38.494697  230252 out.go:358] Setting ErrFile to fd 2...
	I0924 00:14:38.494702  230252 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:14:38.494897  230252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-7438/.minikube/bin
	I0924 00:14:38.495078  230252 out.go:352] Setting JSON to false
	I0924 00:14:38.495110  230252 mustload.go:65] Loading cluster: multinode-908377
	I0924 00:14:38.495241  230252 notify.go:220] Checking for updates...
	I0924 00:14:38.495679  230252 config.go:182] Loaded profile config "multinode-908377": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0924 00:14:38.495707  230252 status.go:174] checking status of multinode-908377 ...
	I0924 00:14:38.496243  230252 cli_runner.go:164] Run: docker container inspect multinode-908377 --format={{.State.Status}}
	I0924 00:14:38.514999  230252 status.go:364] multinode-908377 host status = "Stopped" (err=<nil>)
	I0924 00:14:38.515045  230252 status.go:377] host is not running, skipping remaining checks
	I0924 00:14:38.515052  230252 status.go:176] multinode-908377 status: &{Name:multinode-908377 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:14:38.515118  230252 status.go:174] checking status of multinode-908377-m02 ...
	I0924 00:14:38.515415  230252 cli_runner.go:164] Run: docker container inspect multinode-908377-m02 --format={{.State.Status}}
	I0924 00:14:38.534307  230252 status.go:364] multinode-908377-m02 host status = "Stopped" (err=<nil>)
	I0924 00:14:38.534333  230252 status.go:377] host is not running, skipping remaining checks
	I0924 00:14:38.534339  230252 status.go:176] multinode-908377-m02 status: &{Name:multinode-908377-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.31s)

TestMultiNode/serial/RestartMultiNode (54.48s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-908377 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-908377 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (53.940454732s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-908377 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.48s)

TestMultiNode/serial/ValidateNameConflict (26.22s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-908377
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-908377-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-908377-m02 --driver=docker  --container-runtime=docker: exit status 14 (65.012296ms)

-- stdout --
	* [multinode-908377-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-908377-m02' is duplicated with machine name 'multinode-908377-m02' in profile 'multinode-908377'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-908377-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-908377-m03 --driver=docker  --container-runtime=docker: (23.87583151s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-908377
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-908377: exit status 80 (251.918104ms)

-- stdout --
	* Adding node m03 to cluster multinode-908377 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-908377-m03 already exists in multinode-908377-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-908377-m03
E0924 00:15:59.111248   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-908377-m03: (1.982889631s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.22s)

TestPreload (109.04s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-372762 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0924 00:16:41.324551   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-372762 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (51.07899628s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-372762 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-372762 image pull gcr.io/k8s-minikube/busybox: (2.42151302s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-372762
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-372762: (10.607461082s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-372762 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-372762 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (42.582005125s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-372762 image list
helpers_test.go:175: Cleaning up "test-preload-372762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-372762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-372762: (2.160556764s)
--- PASS: TestPreload (109.04s)

TestScheduledStopUnix (94.17s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-812932 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-812932 --memory=2048 --driver=docker  --container-runtime=docker: (21.321358362s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-812932 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-812932 -n scheduled-stop-812932
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-812932 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 00:18:13.899105   14219 retry.go:31] will retry after 55.385µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.900257   14219 retry.go:31] will retry after 121.554µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.901329   14219 retry.go:31] will retry after 322.923µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.902454   14219 retry.go:31] will retry after 442.34µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.903577   14219 retry.go:31] will retry after 597.877µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.904733   14219 retry.go:31] will retry after 857.858µs: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.905860   14219 retry.go:31] will retry after 1.616233ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.908024   14219 retry.go:31] will retry after 2.38199ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.911253   14219 retry.go:31] will retry after 1.591057ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.913495   14219 retry.go:31] will retry after 3.83313ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.917717   14219 retry.go:31] will retry after 3.183944ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.921963   14219 retry.go:31] will retry after 7.926973ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.930200   14219 retry.go:31] will retry after 7.570984ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.938464   14219 retry.go:31] will retry after 28.490163ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
I0924 00:18:13.967747   14219 retry.go:31] will retry after 19.107134ms: open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/scheduled-stop-812932/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-812932 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-812932 -n scheduled-stop-812932
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-812932
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-812932 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-812932
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-812932: exit status 7 (60.011132ms)

-- stdout --
	scheduled-stop-812932
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-812932 -n scheduled-stop-812932
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-812932 -n scheduled-stop-812932: exit status 7 (62.431131ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-812932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-812932
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-812932: (1.600688324s)
--- PASS: TestScheduledStopUnix (94.17s)
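The run of retry.go lines above is minikube polling for the scheduled-stop pid file, sleeping a little longer (with jitter) each time the open fails. A minimal sketch of that wait-for-file pattern, with plain doubling standing in for the real jittered backoff and an illustrative path:

    // Sketch: poll for a file with growing delays, like the retry.go
    // lines above. The path and doubling policy here are illustrative.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForFile(path string, attempts int) error {
        delay := 50 * time.Microsecond
        for i := 0; i < attempts; i++ {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %s not found\n", delay, path)
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("gave up waiting for %s", path)
    }

    func main() {
        if err := waitForFile("/tmp/example-profile/pid", 15); err != nil {
            fmt.Println(err)
        }
    }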

TestSkaffold (104.93s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1218970976 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-506053 --memory=2600 --driver=docker  --container-runtime=docker
E0924 00:19:36.045580   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-506053 --memory=2600 --driver=docker  --container-runtime=docker: (23.896034691s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1218970976 run --minikube-profile skaffold-506053 --kube-context skaffold-506053 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1218970976 run --minikube-profile skaffold-506053 --kube-context skaffold-506053 --status-check=true --port-forward=false --interactive=false: (1m4.478694101s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-68f64d67cb-2cjsd" [c791b7f3-4909-4bae-bf44-04eb0e46e319] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004048453s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-58f9fddd69-bq72k" [45b67fd1-e486-4e20-9f51-662eed391498] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002928398s
helpers_test.go:175: Cleaning up "skaffold-506053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-506053
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-506053: (2.670880813s)
--- PASS: TestSkaffold (104.93s)

TestInsufficientStorage (12.52s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-022044 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-022044 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.394152587s)

-- stdout --
	{"specversion":"1.0","id":"f7cbdcd0-75c0-435d-b878-fdb8f5517da6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-022044] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c93274f-93b9-4c76-8ad9-8e79ee301f5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"627c82cf-b5ea-4968-9621-a75aec568d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1a65022-5f0d-4be1-93f4-ca90048a822c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig"}}
	{"specversion":"1.0","id":"e012a633-b374-4a5d-8c59-542d459808fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube"}}
	{"specversion":"1.0","id":"4ae669aa-1b1f-4f18-8f7a-eace51163dcf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"78a608bd-8d21-4764-806c-fcd468ab3dc6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64153ca4-1387-4a3c-8994-07fef1c07ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"491b29e4-e31d-4720-9dfe-4956639a7895","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"92c91e9d-3a91-41ea-8ae7-0c45d3bf3ab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3a0891d6-f172-4a3b-936e-521473356840","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"e0c802c8-101c-472d-9f1a-f0db1280d251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-022044\" primary control-plane node in \"insufficient-storage-022044\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"03f6a34e-b8d6-40af-837e-9a5a725a41af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8812d19e-b765-4b73-b8a9-e232cb6ccd98","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"50147a6f-4b98-4035-a9f5-d41f9c38c202","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-022044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-022044 --output=json --layout=cluster: exit status 7 (249.488849ms)

-- stdout --
	{"Name":"insufficient-storage-022044","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-022044","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0924 00:21:21.926350  270719 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-022044" does not appear in /home/jenkins/minikube-integration/19696-7438/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-022044 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-022044 --output=json --layout=cluster: exit status 7 (245.329449ms)

-- stdout --
	{"Name":"insufficient-storage-022044","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-022044","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0924 00:21:22.172876  270822 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-022044" does not appear in /home/jenkins/minikube-integration/19696-7438/kubeconfig
	E0924 00:21:22.182318  270822 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/insufficient-storage-022044/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-022044" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-022044
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-022044: (1.629474527s)
--- PASS: TestInsufficientStorage (12.52s)
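With --output=json, minikube emits one CloudEvents-style JSON object per line; the specversion/type/data fields are visible verbatim in the stdout block above. A hedged Go sketch that scans such a stream and pulls the advice out of the error event (field names are taken from the log; the reading-from-stdin setup is illustrative):

    // Sketch: decode minikube's line-delimited JSON events, e.g.
    //   minikube start --output=json ... | ./events
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // event lines can be long
        for sc.Scan() {
            var ev event
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // skip anything that isn't a JSON event
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Println("exitcode:", ev.Data["exitcode"])
                fmt.Println("advice:", ev.Data["advice"])
            }
        }
    }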

TestRunningBinaryUpgrade (60.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.728299273 start -p running-upgrade-716347 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.728299273 start -p running-upgrade-716347 --memory=2200 --vm-driver=docker  --container-runtime=docker: (31.029877222s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-716347 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-716347 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (24.867355555s)
helpers_test.go:175: Cleaning up "running-upgrade-716347" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-716347
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-716347: (2.21613873s)
--- PASS: TestRunningBinaryUpgrade (60.56s)

TestKubernetesUpgrade (341.77s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.35258597s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-937062
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-937062: (10.709406386s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-937062 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-937062 status --format={{.Host}}: exit status 7 (94.578472ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m31.865320995s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-937062 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (71.631333ms)

-- stdout --
	* [kubernetes-upgrade-937062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-937062
	    minikube start -p kubernetes-upgrade-937062 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9370622 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-937062 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-937062 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (19.393034075s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-937062" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-937062
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-937062: (2.211853873s)
--- PASS: TestKubernetesUpgrade (341.77s)

TestMissingContainerUpgrade (103.56s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1978825216 start -p missing-upgrade-887001 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1978825216 start -p missing-upgrade-887001 --memory=2200 --driver=docker  --container-runtime=docker: (36.011294919s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-887001
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-887001: (10.543109056s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-887001
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-887001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-887001 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.14916479s)
helpers_test.go:175: Cleaning up "missing-upgrade-887001" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-887001
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-887001: (2.395441161s)
--- PASS: TestMissingContainerUpgrade (103.56s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.90s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (77.779826ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-478062] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-7438/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-7438/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
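
Note: exit status 14 is minikube's usage-error exit code; start rejects the contradictory pair of flags before doing any work. A rough Go sketch of this kind of mutual-exclusion check (hypothetical names; minikube's real validation lives in its start command, not in this snippet):

	package main

	import (
		"errors"
		"flag"
		"fmt"
		"os"
	)

	const exitUsage = 14 // usage-error exit code observed in the log

	// validate rejects --kubernetes-version when --no-kubernetes is set.
	func validate(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		}
		return nil
	}

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		version := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()
		if err := validate(*noK8s, *version); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(exitUsage)
		}
	}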

                                                
                                    
TestPause/serial/Start (40.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-843782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-843782 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (40.941356501s)
--- PASS: TestPause/serial/Start (40.94s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (32.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-478062 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-478062 --driver=docker  --container-runtime=docker: (31.746608387s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-478062 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (32.05s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (146.69s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1647975263 start -p stopped-upgrade-494457 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0924 00:21:41.325056   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1647975263 start -p stopped-upgrade-494457 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m48.807421922s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1647975263 -p stopped-upgrade-494457 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1647975263 -p stopped-upgrade-494457 stop: (12.869974104s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-494457 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-494457 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.015429031s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (146.69s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.19s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --driver=docker  --container-runtime=docker: (5.285848643s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-478062 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-478062 status -o json: exit status 2 (282.43146ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-478062","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-478062
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-478062: (1.621484148s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.19s)
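
Note: the exit status 2 above is expected rather than a failure: with the kubelet and apiserver stopped, `minikube status` signals the degraded state through its exit code while the JSON still shows the host Running, which is the desired split after restarting an existing profile with --no-kubernetes. A small Go sketch that decodes exactly the fields shown, for scripts consuming this output:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields in the `minikube status -o json` line above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-478062","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var st Status
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
	}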

                                                
                                    
TestNoKubernetes/serial/Start (10.11s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-478062 --no-kubernetes --driver=docker  --container-runtime=docker: (10.114469701s)
--- PASS: TestNoKubernetes/serial/Start (10.11s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (33.58s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-843782 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-843782 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.568742164s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (33.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-478062 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-478062 "sudo systemctl is-active --quiet service kubelet": exit status 1 (274.980429ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
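
Note: here the non-zero exit is the assertion itself. `systemctl is-active` exits 0 only when the unit is active, typically 3 for an inactive unit, so "kubelet not running" surfaces through ssh as exit status 3. A Go sketch of checking that exit code, run locally against systemctl as a stand-in for the `minikube ssh` wrapper:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Stand-in for: minikube ssh -p <profile> "sudo systemctl is-active --quiet service kubelet"
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 3:
			fmt.Println("kubelet inactive, as expected")
		default:
			fmt.Println("could not determine kubelet state:", err)
		}
	}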

                                                
                                    
TestNoKubernetes/serial/ProfileList (16.06s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.390438882s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (16.06s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-478062
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-478062: (1.250894323s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.66s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-478062 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-478062 --driver=docker  --container-runtime=docker: (7.662399686s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.66s)

                                                
                                    
TestPause/serial/Pause (0.52s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-843782 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.52s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-478062 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-478062 "sudo systemctl is-active --quiet service kubelet": exit status 1 (244.882422ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.24s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-843782 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-843782 --output=json --layout=cluster: exit status 2 (293.119985ms)

                                                
                                                
-- stdout --
	{"Name":"pause-843782","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-843782","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
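
Note: `--layout=cluster` reuses HTTP-style status codes, as the output above shows: 200/OK for the node and kubeconfig, 405/Stopped for the kubelet, and 418/Paused for the apiserver and the cluster as a whole, so exit status 2 is the expected result for a paused profile. A Go sketch decoding the nested layout; only the fields visible above are declared:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type Node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]Component
	}

	type ClusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []Node
	}

	func main() {
		raw := `{"Name":"pause-843782","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-843782","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var cs ClusterStatus
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		for _, n := range cs.Nodes {
			for name, c := range n.Components {
				fmt.Printf("%s: %d %s\n", name, c.StatusCode, c.StatusName)
			}
		}
	}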

                                                
                                    
TestPause/serial/Unpause (0.41s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-843782 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.41s)

                                                
                                    
TestPause/serial/PauseAgain (0.6s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-843782 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.60s)

                                                
                                    
TestPause/serial/DeletePaused (2.31s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-843782 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-843782 --alsologtostderr -v=5: (2.30894075s)
--- PASS: TestPause/serial/DeletePaused (2.31s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (1.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.738467796s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-843782
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-843782: exit status 1 (17.923384ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-843782: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.81s)
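
Note: deletion is verified negatively here: `docker volume inspect` failing with "no such volume" (and the profile no longer appearing in `docker ps -a` or `docker network ls`) is the success condition. A small Go helper in that spirit, illustrative rather than the suite's actual check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// volumeGone reports whether Docker confirms the named volume no longer
	// exists, the expected state after `minikube delete`.
	func volumeGone(name string) bool {
		out, err := exec.Command("docker", "volume", "inspect", name).CombinedOutput()
		return err != nil && strings.Contains(string(out), "no such volume")
	}

	func main() {
		if volumeGone("pause-843782") {
			fmt.Println("volume cleaned up")
		} else {
			fmt.Println("volume still present")
		}
	}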

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-494457
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-494457: (1.423967624s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (133.71s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003204 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0924 00:24:36.045589   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-003204 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m13.712778139s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.71s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 00:25:57.605406   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.611806   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.623185   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.644543   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.686515   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.767934   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:57.929433   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:58.251121   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:25:58.892556   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:26:00.174178   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:26:02.736255   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m9.915064647s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.92s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-449330 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b9a9c1d-fe26-4f22-b5e0-51c9fd713edd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0924 00:26:07.857809   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [2b9a9c1d-fe26-4f22-b5e0-51c9fd713edd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004153368s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-449330 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-449330 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-449330 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-449330 --alsologtostderr -v=3
E0924 00:26:18.099544   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-449330 --alsologtostderr -v=3: (10.862363541s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449330 -n no-preload-449330
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449330 -n no-preload-449330: exit status 7 (78.844612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-449330 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (263.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-449330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 00:26:38.581338   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:26:41.325021   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-449330 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m23.246512735s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-449330 -n no-preload-449330
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.58s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003204 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cdc180dc-ffdb-4e71-a209-8427313088bd] Pending
helpers_test.go:344: "busybox" [cdc180dc-ffdb-4e71-a209-8427313088bd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cdc180dc-ffdb-4e71-a209-8427313088bd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003857898s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-003204 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-003204 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-003204 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (10.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-003204 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-003204 --alsologtostderr -v=3: (10.815321546s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.82s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003204 -n old-k8s-version-003204
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003204 -n old-k8s-version-003204: exit status 7 (82.513881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-003204 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (131.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-003204 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0924 00:27:19.543426   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-003204 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m11.188930864s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-003204 -n old-k8s-version-003204
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (39.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-434216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-434216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.525009379s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-434216 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5ffa2ce3-e5ca-43c2-9e57-5b30784382b6] Pending
helpers_test.go:344: "busybox" [5ffa2ce3-e5ca-43c2-9e57-5b30784382b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5ffa2ce3-e5ca-43c2-9e57-5b30784382b6] Running
E0924 00:28:41.465217   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003226995s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-434216 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-434216 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-434216 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.79s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.75s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-434216 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-434216 --alsologtostderr -v=3: (10.74878467s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.75s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-434216 -n embed-certs-434216
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-434216 -n embed-certs-434216: exit status 7 (76.217153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-434216 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (302.95s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-434216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-434216 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (5m2.585417588s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-434216 -n embed-certs-434216
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.95s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tpbvr" [b55154a1-0c71-47d3-a0fe-a88bcf6f70cb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00366005s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
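
Note: these *ExistsAfterStop checks all follow the same pattern: wait up to a deadline for pods matching a label selector to come up healthy after the restart. A rough Go sketch of that polling loop via kubectl (illustrative; the suite uses its own helpers in helpers_test.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForLabel polls kubectl until a pod matching the selector reports
	// phase Running, or the deadline passes.
	func waitForLabel(kubeContext, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, _ := exec.Command("kubectl", "--context", kubeContext, "-n", ns,
				"get", "pods", "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if strings.Contains(string(out), "Running") {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for %s in %s", selector, ns)
	}

	func main() {
		err := waitForLabel("old-k8s-version-003204", "kubernetes-dashboard",
			"k8s-app=kubernetes-dashboard", 9*time.Minute)
		fmt.Println(err)
	}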

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-tpbvr" [b55154a1-0c71-47d3-a0fe-a88bcf6f70cb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00469395s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-003204 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-669017 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-669017 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (43.74804224s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (43.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-003204 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-003204 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003204 -n old-k8s-version-003204
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003204 -n old-k8s-version-003204: exit status 2 (273.852431ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-003204 -n old-k8s-version-003204
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-003204 -n old-k8s-version-003204: exit status 2 (306.366587ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-003204 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-003204 -n old-k8s-version-003204
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-003204 -n old-k8s-version-003204
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (29.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-780821 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0924 00:29:44.396188   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-780821 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (29.117359546s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-780821 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.97s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (10.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-780821 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-780821 --alsologtostderr -v=3: (10.802982866s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.80s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-669017 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9869d2be-5184-429e-aeba-1dd4df5affe7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9869d2be-5184-429e-aeba-1dd4df5affe7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00422902s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-669017 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-780821 -n newest-cni-780821
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-780821 -n newest-cni-780821: exit status 7 (80.863056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-780821 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.54s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-780821 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-780821 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (15.199429646s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-780821 -n newest-cni-780821
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.54s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-669017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-669017 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-669017 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-669017 --alsologtostderr -v=3: (10.844335107s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.84s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017: exit status 7 (87.081916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-669017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (286.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-669017 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-669017 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m46.625944315s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (286.95s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-780821 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.35s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-780821 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-780821 -n newest-cni-780821
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-780821 -n newest-cni-780821: exit status 2 (298.468169ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-780821 -n newest-cni-780821
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-780821 -n newest-cni-780821: exit status 2 (291.263826ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-780821 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-780821 -n newest-cni-780821
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-780821 -n newest-cni-780821
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)
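Editor's note: the "exit status 2 (may be ok)" lines above are expected. While a profile is paused, `minikube status` still prints the component state (APIServer=Paused, Kubelet=Stopped) but exits non-zero, so a checker has to read stdout even when the command errors. A minimal, hypothetical helper showing that handling:

// pause_status_sketch.go — tolerate exit status 2 and keep the stdout value.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// componentState runs `minikube status --format={{.FIELD}}`; minikube uses
// exit status 2 for stopped/paused components, so that code is not fatal.
func componentState(profile, field string) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still populated on *exec.ExitError
	var exitErr *exec.ExitError
	if err != nil && (!errors.As(err, &exitErr) || exitErr.ExitCode() != 2) {
		return "", err // anything other than exit status 2 is a real failure
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, field := range []string{"APIServer", "Kubelet"} {
		state, err := componentState("newest-cni-780821", field)
		if err != nil {
			fmt.Println(field, "error:", err)
			continue
		}
		fmt.Println(field+":", state) // "Paused" / "Stopped" while paused
	}
}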

TestNetworkPlugins/group/auto/Start (38.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (38.784191911s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.78s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bcflw" [df80323f-d8b1-4cf1-8f75-0c0681a93173] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003868311s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bcflw" [df80323f-d8b1-4cf1-8f75-0c0681a93173] Running
E0924 00:30:57.605697   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003277185s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-449330 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-449330 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.5s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-449330 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449330 -n no-preload-449330
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449330 -n no-preload-449330: exit status 2 (296.029164ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449330 -n no-preload-449330
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449330 -n no-preload-449330: exit status 2 (331.792514ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-449330 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-449330 -n no-preload-449330
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-449330 -n no-preload-449330
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.50s)

TestNetworkPlugins/group/kindnet/Start (57.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0924 00:31:06.783040   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:06.791355   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:06.802791   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:06.825025   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:06.866855   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:06.949015   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:07.111177   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:07.432775   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:08.075112   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:09.356942   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:11.918439   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.153492244s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-262209 "pgrep -a kubelet"
E0924 00:31:17.040425   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
I0924 00:31:17.145557   14219 config.go:182] Loaded profile config "auto-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wpnsg" [060d91ec-fbdf-491f-b8b5-f53b5486b61f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wpnsg" [060d91ec-fbdf-491f-b8b5-f53b5486b61f] Running
E0924 00:31:25.306861   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/skaffold-506053/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004190695s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
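Editor's note: each NetCatPod step follows the same pattern: apply testdata/netcat-deployment.yaml, then wait for a pod labelled app=netcat to reach Running. A rough sketch of that wait using kubectl via os/exec; the real helper (helpers_test.go:344) watches pod state through the Kubernetes API, so this is an approximation:

// netcatpod_wait_sketch.go — poll until an app=netcat pod reports Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(15 * time.Minute) // matches the 15m0s wait above
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "auto-262209",
			"get", "pods", "-n", "default", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second) // poll interval chosen for the sketch
	}
	fmt.Println("timed out waiting for app=netcat")
}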

TestNetworkPlugins/group/auto/DNS (26.66s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-262209 exec deployment/netcat -- nslookup kubernetes.default
E0924 00:31:27.282382   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-262209 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.155795765s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0924 00:31:40.491424   14219 retry.go:31] will retry after 1.368441195s: exit status 1
E0924 00:31:41.324734   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/addons-537454/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Run:  kubectl --context auto-262209 exec deployment/netcat -- nslookup kubernetes.default
E0924 00:31:47.574214   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.580613   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.591990   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.613528   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.655702   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.737071   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.764449   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:47.898937   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:48.220581   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:48.862268   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:31:50.144424   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context auto-262209 exec deployment/netcat -- nslookup kubernetes.default: (10.139201872s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.66s)
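Editor's note: the DNS step above is the one non-trivial pass in this group: the first nslookup times out ("no servers could be reached"), retry.go waits ~1.4s, and the rerun succeeds, which is why the test takes 26.66s. A minimal sketch of that retry shape; the jittered delay is a stand-in, not minikube's actual retry package:

// dns_retry_sketch.go — rerun the in-pod lookup until it succeeds.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	lookup := func() error {
		return exec.Command("kubectl", "--context", "auto-262209", "exec",
			"deployment/netcat", "--", "nslookup", "kubernetes.default").Run()
	}
	for attempt := 1; attempt <= 5; attempt++ {
		if err := lookup(); err == nil {
			fmt.Println("DNS resolved on attempt", attempt)
			return
		}
		delay := time.Second + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %s: lookup failed\n", delay)
		time.Sleep(delay)
	}
	fmt.Println("DNS never resolved")
}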

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
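Editor's note: Localhost and HairPin both probe from inside the netcat pod with `nc -w 5 -i 5 -z <host> 8080` (-z: connect without sending data, -w 5: 5-second timeout). HairPin is the interesting one: the pod dials its own Service name ("netcat"), so traffic goes out to the service VIP and must be hairpinned back to the same pod. A hypothetical wrapper around the logged command; exit code 0 means the connection succeeded:

// hairpin_sketch.go — run the hairpin probe inside the netcat pod.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "--context", "auto-262209", "exec",
		"deployment/netcat", "--", "/bin/sh", "-c",
		"nc -w 5 -i 5 -z netcat 8080")
	if err := cmd.Run(); err != nil {
		fmt.Println("hairpin connectivity failed:", err)
		return
	}
	fmt.Println("hairpin connectivity ok")
}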

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-mkp4w" [0dd10db1-fe7f-4d2c-8f44-6a58cf64d1fa] Running
E0924 00:32:08.070980   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003611336s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-262209 "pgrep -a kubelet"
I0924 00:32:10.267215   14219 config.go:182] Loaded profile config "kindnet-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.49s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l7ngt" [1e25befb-931a-45de-82e4-e6c37cf2221d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l7ngt" [1e25befb-931a-45de-82e4-e6c37cf2221d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00397895s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.24s)

TestNetworkPlugins/group/calico/Start (64.35s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m4.354001848s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.35s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (48.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0924 00:33:09.514646   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.410168608s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.41s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-jtlt5" [c52f86a1-2076-437c-b128-d2880fc70e78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004472147s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-262209 "pgrep -a kubelet"
I0924 00:33:21.008971   14219 config.go:182] Loaded profile config "calico-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

TestNetworkPlugins/group/calico/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4s2js" [c31c647c-e43e-49a7-bf35-f8647044d11f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4s2js" [c31c647c-e43e-49a7-bf35-f8647044d11f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004081499s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-262209 "pgrep -a kubelet"
I0924 00:33:29.273478   14219 config.go:182] Loaded profile config "custom-flannel-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7llk4" [edbde512-c511-4880-b27e-9d8d91eb48f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7llk4" [edbde512-c511-4880-b27e-9d8d91eb48f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004092395s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/false/Start (65.22s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0924 00:33:50.648188   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/no-preload-449330/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m5.219824182s)
--- PASS: TestNetworkPlugins/group/false/Start (65.22s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jkpq4" [bf743567-898a-4b07-b86b-95883408edf7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004674705s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (68.95s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m8.946223978s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.95s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-jkpq4" [bf743567-898a-4b07-b86b-95883408edf7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004888977s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-434216 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-434216 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.66s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-434216 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-434216 -n embed-certs-434216
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-434216 -n embed-certs-434216: exit status 2 (323.493224ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-434216 -n embed-certs-434216
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-434216 -n embed-certs-434216: exit status 2 (319.061723ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-434216 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-434216 -n embed-certs-434216
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-434216 -n embed-certs-434216
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.66s)

TestNetworkPlugins/group/flannel/Start (43.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0924 00:34:31.436507   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/old-k8s-version-003204/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:34:36.046578   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/functional-978241/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (43.582407077s)
--- PASS: TestNetworkPlugins/group/flannel/Start (43.58s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-262209 "pgrep -a kubelet"
I0924 00:34:55.860108   14219 config.go:182] Loaded profile config "false-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-trm2h" [039e0cd8-b925-4446-afdc-b37c8e5137dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-trm2h" [039e0cd8-b925-4446-afdc-b37c8e5137dc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004131531s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t4lqz" [49130028-64a0-4df4-9d3b-9e5a9ae7b562] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003489816s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-262209 "pgrep -a kubelet"
I0924 00:35:04.498114   14219 config.go:182] Loaded profile config "flannel-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-trhf9" [ed8ae625-84f4-46db-8968-7114e8cec19b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-trhf9" [ed8ae625-84f4-46db-8968-7114e8cec19b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004189387s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-262209 "pgrep -a kubelet"
I0924 00:35:08.202247   14219 config.go:182] Loaded profile config "enable-default-cni-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8cc7t" [25f0d299-9339-4538-980b-f32627e879c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8cc7t" [25f0d299-9339-4538-980b-f32627e879c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004520014s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dlkq6" [0860209d-e378-4c7e-8705-ac3b74de62d8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003797983s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/bridge/Start (43.95s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (43.953852348s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.95s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dlkq6" [0860209d-e378-4c7e-8705-ac3b74de62d8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005708114s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-669017 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-669017 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.37s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-669017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017: exit status 2 (419.020047ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017: exit status 2 (422.939737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-669017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-669017 -n default-k8s-diff-port-669017
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

TestNetworkPlugins/group/kubenet/Start (33.13s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-262209 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (33.130248503s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (33.13s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-262209 "pgrep -a kubelet"
I0924 00:36:07.904100   14219 config.go:182] Loaded profile config "bridge-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wpt6c" [39663e52-d592-4e57-a5b9-9bb0e51b4f5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wpt6c" [39663e52-d592-4e57-a5b9-9bb0e51b4f5c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004179515s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.18s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-262209 "pgrep -a kubelet"
I0924 00:36:09.123574   14219 config.go:182] Loaded profile config "kubenet-262209": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-262209 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-46c48" [79c60446-f908-4ca7-be1b-29123d715515] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-46c48" [79c60446-f908-4ca7-be1b-29123d715515] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003944953s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.25s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-262209 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0924 00:36:17.323283   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:17.329636   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:17.341004   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:17.362404   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:17.404229   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
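Note: the HairPin check has the netcat pod dial its own Service name, so traffic must leave the pod, hit the service VIP, and loop back to the same pod (hairpin NAT); nc -z only probes the port and -w 5 caps the wait at five seconds. The interleaved cert_rotation errors appear to be benign noise: they reference the client certificate of the auto-262209 profile, which was deleted earlier in the run, so the client-cert watcher logs a missing file rather than anything wrong with this test. A minimal manual version of the same probe:

    kubectl --context bridge-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080" && echo hairpin-ok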

TestNetworkPlugins/group/kubenet/DNS (21.02s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-262209 exec deployment/netcat -- nslookup kubernetes.default
E0924 00:36:19.893894   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:22.455378   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:36:27.577006   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-262209 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.125267018s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
** stderr **
	command terminated with exit code 1

** /stderr **
I0924 00:36:34.499319   14219 retry.go:31] will retry after 774.315004ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kubenet-262209 exec deployment/netcat -- nslookup kubernetes.default
E0924 00:36:37.818783   14219 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/auto-262209/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context kubenet-262209 exec deployment/netcat -- nslookup kubernetes.default: (5.121416161s)
--- PASS: TestNetworkPlugins/group/kubenet/DNS (21.02s)
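Note: this pass was flaky rather than clean. The first nslookup timed out after ~15s (";; connection timed out; no servers could be reached"), retry.go backed off 774ms, and the second attempt resolved in ~5s. To distinguish a slow CoreDNS from a broken service path, one can point the lookup at the cluster DNS ClusterIP directly (10.96.0.10, the same address the debug tooling elsewhere in this report targets):

    kubectl --context kubenet-262209 exec deployment/netcat -- nslookup kubernetes.default 10.96.0.10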

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-262209 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
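Note: this job runs the Docker container runtime, so the containerd-specific docker-env path is skipped by design. A start invocation that would exercise it, as a sketch:

    out/minikube-linux-amd64 start --driver=docker --container-runtime=containerd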

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-121695" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-121695
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/cilium (3.99s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-262209 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-262209

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-262209

>>> host: /etc/nsswitch.conf:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/hosts:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/resolv.conf:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-262209

>>> host: crictl pods:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: crictl containers:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> k8s: describe netcat deployment:
error: context "cilium-262209" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-262209" does not exist

>>> k8s: netcat logs:
error: context "cilium-262209" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-262209" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-262209" does not exist

>>> k8s: coredns logs:
error: context "cilium-262209" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-262209" does not exist

>>> k8s: api server logs:
error: context "cilium-262209" does not exist

>>> host: /etc/cni:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: ip a s:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: ip r s:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: iptables-save:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: iptables table nat:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-262209

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-262209

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-262209" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-262209" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-262209

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-262209

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-262209" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-262209" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-262209" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-262209" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-262209" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: kubelet daemon config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> k8s: kubelet logs:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19696-7438/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 00:23:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-887001
contexts:
- context:
    cluster: missing-upgrade-887001
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 00:23:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-887001
  name: missing-upgrade-887001
current-context: missing-upgrade-887001
kind: Config
preferences: {}
users:
- name: missing-upgrade-887001
  user:
    client-certificate: /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/missing-upgrade-887001/client.crt
    client-key: /home/jenkins/minikube-integration/19696-7438/.minikube/profiles/missing-upgrade-887001/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-262209

>>> host: docker daemon status:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: docker daemon config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: docker system info:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: cri-docker daemon status:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: cri-docker daemon config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: cri-dockerd version:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: containerd daemon status:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: containerd daemon config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: containerd config dump:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: crio daemon status:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: crio daemon config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: /etc/crio:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

>>> host: crio config:
* Profile "cilium-262209" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-262209"

----------------------- debugLogs end: cilium-262209 [took: 3.851004583s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-262209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-262209
--- SKIP: TestNetworkPlugins/group/cilium (3.99s)
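Note: every "context was not found" and "Profile ... not found" line in the debugLogs dump above is expected. The cilium test skips before the cilium-262209 cluster is ever created, but the post-skip debug collector still runs its full battery of kubectl and minikube probes against that profile name; the only kubeconfig entry alive at the time was the leftover missing-upgrade-887001 context, which is why that is what ">>> k8s: kubectl config" prints. The collectors' own output names the fix, were the profile actually wanted:

    minikube start -p cilium-262209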