Test Report: Docker_macOS 19672

Commit: d6d2a37830b251a8a712eec07ee86a534797346d:2024-09-20:36302

Tests failed: 1 of 342

Order  Failed test                   Duration
33     TestAddons/parallel/Registry  73.34s
TestAddons/parallel/Registry (73.34s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.050643ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-lnt2v" [78e593b3-9f6d-4e81-a44a-8d0c99ad1e53] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003653644s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-t5vth" [2c21fbde-2aec-4dbb-b6db-fbc24c448343] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006869002s
addons_test.go:338: (dbg) Run:  kubectl --context addons-918000 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-918000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-918000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.067884746s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-918000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:353: Unable to complete rest of the test due to connectivity assumptions
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-918000
helpers_test.go:235: (dbg) docker inspect addons-918000:

-- stdout --
	[
	    {
	        "Id": "b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa",
	        "Created": "2024-09-20T22:42:31.54399193Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366819,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T22:42:31.662134801Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa/hosts",
	        "LogPath": "/var/lib/docker/containers/b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa/b3ebaee12db2d606bea31a82274f662c065ceb23d5045bdc2e38f25b763211fa-json.log",
	        "Name": "/addons-918000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-918000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-918000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/71469a6ebe0397969f6a5c7e04e3e6603016a6abd99521145928391a8ca480a3-init/diff:/var/lib/docker/overlay2/6f7861f55c9afd4e02c04e1b37094da164df17cdc580ee3ad7065f4205956dc5/diff",
	                "MergedDir": "/var/lib/docker/overlay2/71469a6ebe0397969f6a5c7e04e3e6603016a6abd99521145928391a8ca480a3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/71469a6ebe0397969f6a5c7e04e3e6603016a6abd99521145928391a8ca480a3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/71469a6ebe0397969f6a5c7e04e3e6603016a6abd99521145928391a8ca480a3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-918000",
	                "Source": "/var/lib/docker/volumes/addons-918000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-918000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-918000",
	                "name.minikube.sigs.k8s.io": "addons-918000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bbae59028159a0c50a6546f458063359bb32443fa8d0894a1d827c97e2699859",
	            "SandboxKey": "/var/run/docker/netns/bbae59028159",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61060"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61061"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61063"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "61059"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-918000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ef51ed80583f9b086fcbeacfaa81787bba34f87ee3455b860eadbb6d322314be",
	                    "EndpointID": "c92cd5d4f3286bb637d8f37e41333b9a77f995686405c49bc0a03ba5cab930e6",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-918000",
	                        "b3ebaee12db2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-918000 -n addons-918000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 logs -n 25: (2.439715707s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-642000   | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT |                     |
	|         | -p download-only-642000                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT | 20 Sep 24 15:41 PDT |
	| delete  | -p download-only-642000                                                                     | download-only-642000   | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT | 20 Sep 24 15:41 PDT |
	| start   | -o=json --download-only                                                                     | download-only-926000   | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT |                     |
	|         | -p download-only-926000                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| delete  | -p download-only-926000                                                                     | download-only-926000   | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| delete  | -p download-only-642000                                                                     | download-only-642000   | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| delete  | -p download-only-926000                                                                     | download-only-926000   | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| start   | --download-only -p                                                                          | download-docker-090000 | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT |                     |
	|         | download-docker-090000                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	| delete  | -p download-docker-090000                                                                   | download-docker-090000 | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| start   | --download-only -p                                                                          | binary-mirror-929000   | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT |                     |
	|         | binary-mirror-929000                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:61048                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-929000                                                                     | binary-mirror-929000   | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:42 PDT |
	| addons  | enable dashboard -p                                                                         | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT |                     |
	|         | addons-918000                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT |                     |
	|         | addons-918000                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-918000 --wait=true                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:42 PDT | 20 Sep 24 15:45 PDT |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker  --addons=ingress                                                           |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-918000 addons disable                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:46 PDT | 20 Sep 24 15:46 PDT |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:54 PDT | 20 Sep 24 15:54 PDT |
	|         | -p addons-918000                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-918000 addons disable                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:54 PDT | 20 Sep 24 15:54 PDT |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-918000 addons disable                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:54 PDT | 20 Sep 24 15:54 PDT |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:54 PDT | 20 Sep 24 15:55 PDT |
	|         | -p addons-918000                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-918000 ssh cat                                                                       | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:55 PDT | 20 Sep 24 15:55 PDT |
	|         | /opt/local-path-provisioner/pvc-1e953f3f-0f81-401a-ab56-c6fd2854bea4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-918000 addons disable                                                                | addons-918000          | jenkins | v1.34.0 | 20 Sep 24 15:55 PDT |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 15:42:08
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 15:42:08.373303   41026 out.go:345] Setting OutFile to fd 1 ...
	I0920 15:42:08.373569   41026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:42:08.373574   41026 out.go:358] Setting ErrFile to fd 2...
	I0920 15:42:08.373577   41026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:42:08.373749   41026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 15:42:08.375260   41026 out.go:352] Setting JSON to false
	I0920 15:42:08.397497   41026 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":22291,"bootTime":1726849837,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 15:42:08.397640   41026 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 15:42:08.419729   41026 out.go:177] * [addons-918000] minikube v1.34.0 on Darwin 14.6.1
	I0920 15:42:08.461261   41026 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 15:42:08.461342   41026 notify.go:220] Checking for updates...
	I0920 15:42:08.503452   41026 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 15:42:08.524240   41026 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 15:42:08.545201   41026 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 15:42:08.566319   41026 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	I0920 15:42:08.587299   41026 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 15:42:08.608684   41026 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 15:42:08.633806   41026 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0920 15:42:08.633973   41026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:42:08.719381   41026 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:42:08.710783821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:42:08.761141   41026 out.go:177] * Using the docker driver based on user configuration
	I0920 15:42:08.782201   41026 start.go:297] selected driver: docker
	I0920 15:42:08.782233   41026 start.go:901] validating driver "docker" against <nil>
	I0920 15:42:08.782250   41026 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 15:42:08.786743   41026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:42:08.865131   41026 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:42:08.856676442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:42:08.865321   41026 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 15:42:08.865524   41026 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 15:42:08.886353   41026 out.go:177] * Using Docker Desktop driver with root privileges
	I0920 15:42:08.907238   41026 cni.go:84] Creating CNI manager for ""
	I0920 15:42:08.907353   41026 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 15:42:08.907362   41026 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 15:42:08.907467   41026 start.go:340] cluster config:
	{Name:addons-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 15:42:08.929322   41026 out.go:177] * Starting "addons-918000" primary control-plane node in "addons-918000" cluster
	I0920 15:42:08.971201   41026 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 15:42:08.992313   41026 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0920 15:42:09.034321   41026 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 15:42:09.034382   41026 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 15:42:09.034397   41026 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 15:42:09.034419   41026 cache.go:56] Caching tarball of preloaded images
	I0920 15:42:09.034634   41026 preload.go:172] Found /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0920 15:42:09.034657   41026 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0920 15:42:09.036143   41026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/config.json ...
	I0920 15:42:09.036276   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/config.json: {Name:mkf62295f07b8ffc85c8015f44ea2a67ed0b8fd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:09.053106   41026 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 15:42:09.053456   41026 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 15:42:09.053478   41026 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 15:42:09.053484   41026 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 15:42:09.053492   41026 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 15:42:09.053497   41026 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0920 15:42:27.876630   41026 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0920 15:42:27.876680   41026 cache.go:194] Successfully downloaded all kic artifacts
	I0920 15:42:27.876737   41026 start.go:360] acquireMachinesLock for addons-918000: {Name:mk70fe7bd98b46a65826cb975a52310265b5d893 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 15:42:27.876950   41026 start.go:364] duration metric: took 200.482µs to acquireMachinesLock for "addons-918000"
	I0920 15:42:27.876982   41026 start.go:93] Provisioning new machine with config: &{Name:addons-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 15:42:27.877038   41026 start.go:125] createHost starting for "" (driver="docker")
	I0920 15:42:27.902184   41026 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 15:42:27.902400   41026 start.go:159] libmachine.API.Create for "addons-918000" (driver="docker")
	I0920 15:42:27.902425   41026 client.go:168] LocalClient.Create starting
	I0920 15:42:27.923324   41026 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem
	I0920 15:42:28.067370   41026 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/cert.pem
	I0920 15:42:28.363217   41026 cli_runner.go:164] Run: docker network inspect addons-918000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 15:42:28.399780   41026 cli_runner.go:211] docker network inspect addons-918000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 15:42:28.399902   41026 network_create.go:284] running [docker network inspect addons-918000] to gather additional debugging logs...
	I0920 15:42:28.399921   41026 cli_runner.go:164] Run: docker network inspect addons-918000
	W0920 15:42:28.417339   41026 cli_runner.go:211] docker network inspect addons-918000 returned with exit code 1
	I0920 15:42:28.417367   41026 network_create.go:287] error running [docker network inspect addons-918000]: docker network inspect addons-918000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-918000 not found
	I0920 15:42:28.417379   41026 network_create.go:289] output of [docker network inspect addons-918000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-918000 not found
	
	** /stderr **
	I0920 15:42:28.417522   41026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 15:42:28.435768   41026 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00163a170}
	I0920 15:42:28.435805   41026 network_create.go:124] attempt to create docker network addons-918000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0920 15:42:28.435890   41026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-918000 addons-918000
	I0920 15:42:28.498618   41026 network_create.go:108] docker network addons-918000 192.168.49.0/24 created
	I0920 15:42:28.498661   41026 kic.go:121] calculated static IP "192.168.49.2" for the "addons-918000" container
	I0920 15:42:28.498818   41026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 15:42:28.516591   41026 cli_runner.go:164] Run: docker volume create addons-918000 --label name.minikube.sigs.k8s.io=addons-918000 --label created_by.minikube.sigs.k8s.io=true
	I0920 15:42:28.534831   41026 oci.go:103] Successfully created a docker volume addons-918000
	I0920 15:42:28.535019   41026 cli_runner.go:164] Run: docker run --rm --name addons-918000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-918000 --entrypoint /usr/bin/test -v addons-918000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0920 15:42:29.253501   41026 oci.go:107] Successfully prepared a docker volume addons-918000
	I0920 15:42:29.253553   41026 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 15:42:29.253572   41026 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 15:42:29.253757   41026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-918000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 15:42:31.440255   41026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-918000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (2.186401245s)
	I0920 15:42:31.440296   41026 kic.go:203] duration metric: took 2.186698749s to extract preloaded images to volume ...
	I0920 15:42:31.440456   41026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 15:42:31.524214   41026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-918000 --name addons-918000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-918000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-918000 --network addons-918000 --ip 192.168.49.2 --volume addons-918000:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0920 15:42:31.799453   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Running}}
	I0920 15:42:31.819606   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:31.839789   41026 cli_runner.go:164] Run: docker exec addons-918000 stat /var/lib/dpkg/alternatives/iptables
	I0920 15:42:31.900114   41026 oci.go:144] the created container "addons-918000" has a running status.
	I0920 15:42:31.900159   41026 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa...
	I0920 15:42:32.085417   41026 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 15:42:32.151895   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:32.176543   41026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 15:42:32.176602   41026 kic_runner.go:114] Args: [docker exec --privileged addons-918000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 15:42:32.245726   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:32.267934   41026 machine.go:93] provisionDockerMachine start ...
	I0920 15:42:32.268156   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:32.291531   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:32.291749   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:32.291757   41026 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 15:42:32.423467   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-918000
	
	I0920 15:42:32.423494   41026 ubuntu.go:169] provisioning hostname "addons-918000"
	I0920 15:42:32.423593   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:32.443888   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:32.444071   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:32.444087   41026 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-918000 && echo "addons-918000" | sudo tee /etc/hostname
	I0920 15:42:32.584188   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-918000
	
	I0920 15:42:32.584298   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:32.603269   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:32.603439   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:32.603451   41026 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-918000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-918000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-918000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 15:42:32.731952   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 15:42:32.731975   41026 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/19672-40263/.minikube CaCertPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19672-40263/.minikube}
	I0920 15:42:32.731995   41026 ubuntu.go:177] setting up certificates
	I0920 15:42:32.732011   41026 provision.go:84] configureAuth start
	I0920 15:42:32.732110   41026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-918000
	I0920 15:42:32.750157   41026 provision.go:143] copyHostCerts
	I0920 15:42:32.750266   41026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.pem (1078 bytes)
	I0920 15:42:32.750508   41026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19672-40263/.minikube/cert.pem (1123 bytes)
	I0920 15:42:32.750679   41026 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19672-40263/.minikube/key.pem (1675 bytes)
	I0920 15:42:32.750858   41026 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca-key.pem org=jenkins.addons-918000 san=[127.0.0.1 192.168.49.2 addons-918000 localhost minikube]
	I0920 15:42:32.946857   41026 provision.go:177] copyRemoteCerts
	I0920 15:42:32.946943   41026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 15:42:32.947010   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:32.965676   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:33.056766   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 15:42:33.078782   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 15:42:33.099395   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I0920 15:42:33.120110   41026 provision.go:87] duration metric: took 388.078085ms to configureAuth
	I0920 15:42:33.120131   41026 ubuntu.go:193] setting minikube options for container-runtime
	I0920 15:42:33.120309   41026 config.go:182] Loaded profile config "addons-918000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 15:42:33.120404   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:33.138929   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:33.139121   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:33.139137   41026 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0920 15:42:33.268977   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0920 15:42:33.268995   41026 ubuntu.go:71] root file system type: overlay
	I0920 15:42:33.269080   41026 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0920 15:42:33.269187   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:33.287885   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:33.288058   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:33.288115   41026 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0920 15:42:33.426978   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0920 15:42:33.427108   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:33.446504   41026 main.go:141] libmachine: Using SSH client type: native
	I0920 15:42:33.446682   41026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x69e2d00] 0x69e59e0 <nil>  [] 0s} 127.0.0.1 61060 <nil> <nil>}
	I0920 15:42:33.446694   41026 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0920 15:42:34.248183   41026 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-19 14:24:32.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-20 22:42:33.424939311 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0920 15:42:34.248210   41026 machine.go:96] duration metric: took 1.980231814s to provisionDockerMachine
	I0920 15:42:34.248218   41026 client.go:171] duration metric: took 6.345739896s to LocalClient.Create
	I0920 15:42:34.248236   41026 start.go:167] duration metric: took 6.345787717s to libmachine.API.Create "addons-918000"
	I0920 15:42:34.248247   41026 start.go:293] postStartSetup for "addons-918000" (driver="docker")
	I0920 15:42:34.248257   41026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 15:42:34.248352   41026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 15:42:34.248431   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:34.267187   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:34.364301   41026 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 15:42:34.369012   41026 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 15:42:34.369049   41026 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 15:42:34.369059   41026 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 15:42:34.369064   41026 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 15:42:34.369075   41026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-40263/.minikube/addons for local assets ...
	I0920 15:42:34.369203   41026 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19672-40263/.minikube/files for local assets ...
	I0920 15:42:34.369254   41026 start.go:296] duration metric: took 121.000857ms for postStartSetup
	I0920 15:42:34.369837   41026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-918000
	I0920 15:42:34.388406   41026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/config.json ...
	I0920 15:42:34.388907   41026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 15:42:34.388987   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:34.408077   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:34.497542   41026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 15:42:34.502488   41026 start.go:128] duration metric: took 6.625389064s to createHost
	I0920 15:42:34.502505   41026 start.go:83] releasing machines lock for "addons-918000", held for 6.625497256s
	I0920 15:42:34.502597   41026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-918000
	I0920 15:42:34.521505   41026 ssh_runner.go:195] Run: cat /version.json
	I0920 15:42:34.521522   41026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 15:42:34.521597   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:34.521614   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:34.542119   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:34.542151   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:34.632394   41026 ssh_runner.go:195] Run: systemctl --version
	I0920 15:42:34.718588   41026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 15:42:34.724064   41026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 15:42:34.746661   41026 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 15:42:34.746735   41026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 15:42:34.771732   41026 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0920 15:42:34.771749   41026 start.go:495] detecting cgroup driver to use...
	I0920 15:42:34.771770   41026 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 15:42:34.771880   41026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 15:42:34.787338   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 15:42:34.796953   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 15:42:34.807070   41026 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 15:42:34.807143   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 15:42:34.816940   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 15:42:34.826853   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 15:42:34.836518   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 15:42:34.845884   41026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 15:42:34.854695   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 15:42:34.865617   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 15:42:34.875074   41026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 15:42:34.884825   41026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 15:42:34.893022   41026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 15:42:34.902071   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:34.959704   41026 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 15:42:35.039607   41026 start.go:495] detecting cgroup driver to use...
	I0920 15:42:35.039651   41026 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 15:42:35.039735   41026 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0920 15:42:35.060482   41026 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0920 15:42:35.060559   41026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 15:42:35.075697   41026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 15:42:35.092334   41026 ssh_runner.go:195] Run: which cri-dockerd
	I0920 15:42:35.096824   41026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0920 15:42:35.106441   41026 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0920 15:42:35.130864   41026 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0920 15:42:35.195240   41026 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0920 15:42:35.256136   41026 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0920 15:42:35.256243   41026 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0920 15:42:35.274908   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:35.335167   41026 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0920 15:42:35.774102   41026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0920 15:42:35.787066   41026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 15:42:35.800427   41026 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0920 15:42:35.867963   41026 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0920 15:42:35.932706   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:35.992053   41026 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0920 15:42:36.017415   41026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0920 15:42:36.029480   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:36.094212   41026 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0920 15:42:36.181954   41026 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0920 15:42:36.182098   41026 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0920 15:42:36.187364   41026 start.go:563] Will wait 60s for crictl version
	I0920 15:42:36.187454   41026 ssh_runner.go:195] Run: which crictl
	I0920 15:42:36.193030   41026 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 15:42:36.232557   41026 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.0
	RuntimeApiVersion:  v1
	I0920 15:42:36.232669   41026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 15:42:36.259264   41026 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0920 15:42:36.335242   41026 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.0 ...
	I0920 15:42:36.335359   41026 cli_runner.go:164] Run: docker exec -t addons-918000 dig +short host.docker.internal
	I0920 15:42:36.421022   41026 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0920 15:42:36.421150   41026 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0920 15:42:36.425879   41026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 15:42:36.437806   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:36.458289   41026 kubeadm.go:883] updating cluster {Name:addons-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 15:42:36.458396   41026 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 15:42:36.458481   41026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 15:42:36.480698   41026 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 15:42:36.480722   41026 docker.go:615] Images already preloaded, skipping extraction
	I0920 15:42:36.480850   41026 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0920 15:42:36.500580   41026 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0920 15:42:36.500622   41026 cache_images.go:84] Images are preloaded, skipping loading
	I0920 15:42:36.500643   41026 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0920 15:42:36.500747   41026 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-918000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 15:42:36.500851   41026 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0920 15:42:36.546340   41026 cni.go:84] Creating CNI manager for ""
	I0920 15:42:36.546359   41026 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 15:42:36.546374   41026 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 15:42:36.546388   41026 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-918000 NodeName:addons-918000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 15:42:36.546485   41026 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-918000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 15:42:36.546560   41026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 15:42:36.557247   41026 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 15:42:36.557322   41026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 15:42:36.567267   41026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0920 15:42:36.584040   41026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 15:42:36.604801   41026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0920 15:42:36.624304   41026 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 15:42:36.628586   41026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 15:42:36.639692   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:36.695785   41026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 15:42:36.723148   41026 certs.go:68] Setting up /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000 for IP: 192.168.49.2
	I0920 15:42:36.723168   41026 certs.go:194] generating shared ca certs ...
	I0920 15:42:36.723184   41026 certs.go:226] acquiring lock for ca certs: {Name:mkf22a06a4269296b71ffa6a62f442e841b6c8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:36.724078   41026 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.key
	I0920 15:42:36.821535   41026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.crt ...
	I0920 15:42:36.821558   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.crt: {Name:mke439c923c73c77ebd0977b2d5b430244747fcd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:36.821966   41026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.key ...
	I0920 15:42:36.821975   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.key: {Name:mk2b6b8cf162a7a671fd83da53f320f918ebcc8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:36.822219   41026 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.key
	I0920 15:42:37.031840   41026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.crt ...
	I0920 15:42:37.031859   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.crt: {Name:mkf69fb66eafd8677e0ab20d414f6856cd22621a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.032913   41026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.key ...
	I0920 15:42:37.032923   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.key: {Name:mk98c5b56b2eee8f2e3a01ad7fb4a5b2e6dc7fa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.033137   41026 certs.go:256] generating profile certs ...
	I0920 15:42:37.033194   41026 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.key
	I0920 15:42:37.033208   41026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt with IP's: []
	I0920 15:42:37.163214   41026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt ...
	I0920 15:42:37.163235   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: {Name:mk5f75a481db83bbc4770ed00a622e71681fa0dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.163584   41026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.key ...
	I0920 15:42:37.163592   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.key: {Name:mka54a98bbc1068f9d29275be6014af3723458af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.163819   41026 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key.59fa6cf4
	I0920 15:42:37.163840   41026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt.59fa6cf4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 15:42:37.250263   41026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt.59fa6cf4 ...
	I0920 15:42:37.250279   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt.59fa6cf4: {Name:mk9b6fa99a3c81ea77779ee4b3d244f03beca97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.250611   41026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key.59fa6cf4 ...
	I0920 15:42:37.250620   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key.59fa6cf4: {Name:mkdea8dccbaab56fccdcca14fdce98fef8d18008 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.250856   41026 certs.go:381] copying /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt.59fa6cf4 -> /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt
	I0920 15:42:37.251039   41026 certs.go:385] copying /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key.59fa6cf4 -> /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key
	I0920 15:42:37.251204   41026 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.key
	I0920 15:42:37.251225   41026 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.crt with IP's: []
	I0920 15:42:37.294081   41026 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.crt ...
	I0920 15:42:37.294092   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.crt: {Name:mk8b2da8c9da6e32c51e0e3f83e8f303230709b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.294365   41026 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.key ...
	I0920 15:42:37.294374   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.key: {Name:mke3eb05692db93e7c81374bffc8623d04b2674e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:37.295315   41026 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 15:42:37.295359   41026 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/ca.pem (1078 bytes)
	I0920 15:42:37.295391   41026 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/cert.pem (1123 bytes)
	I0920 15:42:37.295424   41026 certs.go:484] found cert: /Users/jenkins/minikube-integration/19672-40263/.minikube/certs/key.pem (1675 bytes)
	I0920 15:42:37.296002   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 15:42:37.317276   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 15:42:37.338024   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 15:42:37.358882   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0920 15:42:37.379494   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 15:42:37.400129   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 15:42:37.421357   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 15:42:37.443704   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 15:42:37.466657   41026 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19672-40263/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 15:42:37.489939   41026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 15:42:37.506357   41026 ssh_runner.go:195] Run: openssl version
	I0920 15:42:37.511738   41026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 15:42:37.520779   41026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 15:42:37.524587   41026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 22:42 /usr/share/ca-certificates/minikubeCA.pem
	I0920 15:42:37.524649   41026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 15:42:37.531048   41026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 15:42:37.539982   41026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 15:42:37.543723   41026 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 15:42:37.543774   41026 kubeadm.go:392] StartCluster: {Name:addons-918000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-918000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 15:42:37.543898   41026 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0920 15:42:37.559793   41026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 15:42:37.568159   41026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 15:42:37.576446   41026 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 15:42:37.576513   41026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 15:42:37.584563   41026 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 15:42:37.584574   41026 kubeadm.go:157] found existing configuration files:
	
	I0920 15:42:37.584631   41026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 15:42:37.592825   41026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 15:42:37.592888   41026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 15:42:37.601074   41026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 15:42:37.609042   41026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 15:42:37.609116   41026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 15:42:37.617268   41026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 15:42:37.625485   41026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 15:42:37.625547   41026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 15:42:37.633419   41026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 15:42:37.641519   41026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 15:42:37.641586   41026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 15:42:37.649840   41026 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 15:42:37.682026   41026 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 15:42:37.682074   41026 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 15:42:37.754862   41026 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 15:42:37.754987   41026 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 15:42:37.755107   41026 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 15:42:37.764917   41026 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 15:42:37.813254   41026 out.go:235]   - Generating certificates and keys ...
	I0920 15:42:37.813333   41026 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 15:42:37.813395   41026 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 15:42:38.075863   41026 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 15:42:38.337850   41026 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 15:42:38.479378   41026 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 15:42:38.587547   41026 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 15:42:38.708490   41026 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 15:42:38.708588   41026 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-918000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 15:42:39.178398   41026 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 15:42:39.178642   41026 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-918000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 15:42:39.286773   41026 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 15:42:39.351651   41026 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 15:42:39.400092   41026 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 15:42:39.400188   41026 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 15:42:39.594780   41026 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 15:42:39.963073   41026 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 15:42:40.210350   41026 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 15:42:40.364462   41026 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 15:42:40.408411   41026 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 15:42:40.408795   41026 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 15:42:40.410614   41026 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 15:42:40.431199   41026 out.go:235]   - Booting up control plane ...
	I0920 15:42:40.431287   41026 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 15:42:40.431360   41026 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 15:42:40.431417   41026 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 15:42:40.431499   41026 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 15:42:40.431578   41026 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 15:42:40.431616   41026 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 15:42:40.504434   41026 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 15:42:40.504532   41026 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 15:42:41.006823   41026 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.51125ms
	I0920 15:42:41.006920   41026 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 15:42:45.509591   41026 kubeadm.go:310] [api-check] The API server is healthy after 4.502752612s
	I0920 15:42:45.520669   41026 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 15:42:45.530483   41026 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 15:42:45.547032   41026 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 15:42:45.547189   41026 kubeadm.go:310] [mark-control-plane] Marking the node addons-918000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 15:42:45.553858   41026 kubeadm.go:310] [bootstrap-token] Using token: blkqwl.jnt77i8suoqx0rgb
	I0920 15:42:45.592791   41026 out.go:235]   - Configuring RBAC rules ...
	I0920 15:42:45.592924   41026 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 15:42:45.594723   41026 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 15:42:45.634966   41026 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 15:42:45.637378   41026 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 15:42:45.639728   41026 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 15:42:45.642556   41026 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 15:42:45.918115   41026 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 15:42:46.329804   41026 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 15:42:46.917686   41026 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 15:42:46.918231   41026 kubeadm.go:310] 
	I0920 15:42:46.918314   41026 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 15:42:46.918331   41026 kubeadm.go:310] 
	I0920 15:42:46.918408   41026 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 15:42:46.918414   41026 kubeadm.go:310] 
	I0920 15:42:46.918434   41026 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 15:42:46.918486   41026 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 15:42:46.918539   41026 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 15:42:46.918546   41026 kubeadm.go:310] 
	I0920 15:42:46.918594   41026 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 15:42:46.918598   41026 kubeadm.go:310] 
	I0920 15:42:46.918632   41026 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 15:42:46.918636   41026 kubeadm.go:310] 
	I0920 15:42:46.918680   41026 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 15:42:46.918740   41026 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 15:42:46.918794   41026 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 15:42:46.918803   41026 kubeadm.go:310] 
	I0920 15:42:46.918870   41026 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 15:42:46.918939   41026 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 15:42:46.918948   41026 kubeadm.go:310] 
	I0920 15:42:46.919019   41026 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token blkqwl.jnt77i8suoqx0rgb \
	I0920 15:42:46.919113   41026 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91731b946f5766d1938a3a5b3d68726fd371027a74cbf3d1e966c3963c47bb4d \
	I0920 15:42:46.919131   41026 kubeadm.go:310] 	--control-plane 
	I0920 15:42:46.919137   41026 kubeadm.go:310] 
	I0920 15:42:46.919209   41026 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 15:42:46.919216   41026 kubeadm.go:310] 
	I0920 15:42:46.919288   41026 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token blkqwl.jnt77i8suoqx0rgb \
	I0920 15:42:46.919369   41026 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:91731b946f5766d1938a3a5b3d68726fd371027a74cbf3d1e966c3963c47bb4d 
	I0920 15:42:46.920857   41026 kubeadm.go:310] W0920 22:42:37.679744    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 15:42:46.921131   41026 kubeadm.go:310] W0920 22:42:37.680197    1821 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 15:42:46.921303   41026 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0920 15:42:46.921394   41026 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 15:42:46.921413   41026 cni.go:84] Creating CNI manager for ""
	I0920 15:42:46.921425   41026 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 15:42:46.960228   41026 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0920 15:42:46.982830   41026 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0920 15:42:46.992303   41026 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0920 15:42:47.008363   41026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 15:42:47.008463   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:47.008467   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-918000 minikube.k8s.io/updated_at=2024_09_20T15_42_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f minikube.k8s.io/name=addons-918000 minikube.k8s.io/primary=true
	I0920 15:42:47.084216   41026 ops.go:34] apiserver oom_adj: -16
	I0920 15:42:47.084356   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:47.585458   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:48.084893   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:48.585040   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:49.084457   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:49.584693   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:50.085090   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:50.584831   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:51.085626   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:51.585019   41026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 15:42:51.646361   41026 kubeadm.go:1113] duration metric: took 4.637944105s to wait for elevateKubeSystemPrivileges
	I0920 15:42:51.646386   41026 kubeadm.go:394] duration metric: took 14.102509446s to StartCluster
	I0920 15:42:51.646401   41026 settings.go:142] acquiring lock: {Name:mkceaee88e772f60255917a5490a535f9cba0535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:51.646591   41026 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 15:42:51.646870   41026 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/kubeconfig: {Name:mk5cb0fda72057e63d71ce47a0fadd6f738d4b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:42:51.647490   41026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 15:42:51.647509   41026 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0920 15:42:51.647536   41026 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 15:42:51.647608   41026 addons.go:69] Setting yakd=true in profile "addons-918000"
	I0920 15:42:51.647612   41026 addons.go:69] Setting inspektor-gadget=true in profile "addons-918000"
	I0920 15:42:51.647627   41026 addons.go:234] Setting addon yakd=true in "addons-918000"
	I0920 15:42:51.647635   41026 addons.go:69] Setting storage-provisioner=true in profile "addons-918000"
	I0920 15:42:51.647642   41026 addons.go:69] Setting ingress=true in profile "addons-918000"
	I0920 15:42:51.647640   41026 addons.go:69] Setting default-storageclass=true in profile "addons-918000"
	I0920 15:42:51.647656   41026 config.go:182] Loaded profile config "addons-918000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 15:42:51.647628   41026 addons.go:234] Setting addon inspektor-gadget=true in "addons-918000"
	I0920 15:42:51.647673   41026 addons.go:69] Setting volcano=true in profile "addons-918000"
	I0920 15:42:51.647699   41026 addons.go:69] Setting metrics-server=true in profile "addons-918000"
	I0920 15:42:51.647682   41026 addons.go:69] Setting cloud-spanner=true in profile "addons-918000"
	I0920 15:42:51.647703   41026 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-918000"
	I0920 15:42:51.647696   41026 addons.go:69] Setting ingress-dns=true in profile "addons-918000"
	I0920 15:42:51.647714   41026 addons.go:234] Setting addon volcano=true in "addons-918000"
	I0920 15:42:51.647717   41026 addons.go:234] Setting addon metrics-server=true in "addons-918000"
	I0920 15:42:51.647715   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647715   41026 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-918000"
	I0920 15:42:51.647729   41026 addons.go:234] Setting addon ingress-dns=true in "addons-918000"
	I0920 15:42:51.647738   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647740   41026 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-918000"
	I0920 15:42:51.647661   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647750   41026 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-918000"
	I0920 15:42:51.647768   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647777   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647793   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647635   41026 addons.go:69] Setting gcp-auth=true in profile "addons-918000"
	I0920 15:42:51.647845   41026 mustload.go:65] Loading cluster: addons-918000
	I0920 15:42:51.647674   41026 addons.go:234] Setting addon storage-provisioner=true in "addons-918000"
	I0920 15:42:51.647932   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647674   41026 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-918000"
	I0920 15:42:51.648065   41026 config.go:182] Loaded profile config "addons-918000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 15:42:51.648200   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648298   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.647683   41026 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-918000"
	I0920 15:42:51.648381   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648386   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.647683   41026 addons.go:69] Setting registry=true in profile "addons-918000"
	I0920 15:42:51.648406   41026 addons.go:234] Setting addon registry=true in "addons-918000"
	I0920 15:42:51.648421   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648429   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648387   41026 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-918000"
	I0920 15:42:51.648437   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648422   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.648552   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.648573   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.647694   41026 addons.go:69] Setting volumesnapshots=true in profile "addons-918000"
	I0920 15:42:51.648599   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.647729   41026 addons.go:234] Setting addon cloud-spanner=true in "addons-918000"
	I0920 15:42:51.647686   41026 addons.go:234] Setting addon ingress=true in "addons-918000"
	I0920 15:42:51.649505   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.649452   41026 addons.go:234] Setting addon volumesnapshots=true in "addons-918000"
	I0920 15:42:51.649730   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.649842   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.649952   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.650752   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.650781   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.652760   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.652818   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.653012   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.690080   41026 out.go:177] * Verifying Kubernetes components...
	I0920 15:42:51.728509   41026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 15:42:51.731420   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.731906   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:51.735417   41026 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-918000"
	I0920 15:42:51.735519   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.736130   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.740370   41026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 15:42:51.745435   41026 addons.go:234] Setting addon default-storageclass=true in "addons-918000"
	I0920 15:42:51.745461   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:42:51.745795   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:42:51.761944   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 15:42:51.761994   41026 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0920 15:42:51.798963   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 15:42:51.799001   41026 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 15:42:51.799192   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:51.835984   41026 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0920 15:42:51.814402   41026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 15:42:51.836037   41026 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 15:42:51.839686   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:51.858127   41026 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 15:42:51.879041   41026 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 15:42:51.916163   41026 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 15:42:51.916180   41026 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 15:42:51.916173   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 15:42:51.916440   41026 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 15:42:51.937060   41026 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 15:42:51.937378   41026 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 15:42:51.916190   41026 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 15:42:51.959999   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 15:42:51.997000   41026 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 15:42:52.018075   41026 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 15:42:52.018096   41026 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 15:42:52.018640   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 15:42:52.055239   41026 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 15:42:52.055873   41026 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 15:42:52.055909   41026 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 15:42:52.056181   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.093011   41026 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 15:42:52.093553   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.093559   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.095730   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.114066   41026 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 15:42:52.114701   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.114701   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 15:42:52.119768   41026 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 15:42:52.150966   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 15:42:52.151546   41026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 15:42:52.151642   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.151660   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.171883   41026 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 15:42:52.172085   41026 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 15:42:52.192989   41026 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 15:42:52.218817   41026 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 15:42:52.230346   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 15:42:52.230376   41026 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 15:42:52.230407   41026 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 15:42:52.251150   41026 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 15:42:52.252078   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.252206   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.269391   41026 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 15:42:52.326200   41026 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 15:42:52.326913   41026 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 15:42:52.333686   41026 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0920 15:42:52.333805   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.364127   41026 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 61063 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
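The boxed message above is the key detail for this run: with the Docker driver, the registry addon's port 5000 is not published on the host directly, so host-side access must go through the Docker-mapped port (61063 here). A minimal sketch of the distinction, assuming that mapped port from this log (the port changes per run) and a registry that may or may not be up:

```shell
# Hypothetical sketch based on the boxed log message above.
# Host side: the Docker driver maps the registry's port 5000 to a
# run-specific host port (61063 in this log), so the default port
# 5000 will not work from the host.
HOST_REGISTRY="http://127.0.0.1:61063"

# Pod side: in-cluster clients (like the test's busybox pod) still use
# the ClusterIP service name, untouched by the Docker port mapping:
#   wget --spider -S http://registry.kube-system.svc.cluster.local

# Probe the host-side mapped port; fall back to a message if the
# cluster is not running (as it will not be outside this CI job).
curl -sf "$HOST_REGISTRY/v2/_catalog" \
  || echo "registry not reachable on $HOST_REGISTRY"
```

This is why the failing `wget --spider` in the test header targets the service DNS name rather than a host port: it runs inside a pod, where the mapping does not apply.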
	I0920 15:42:52.385173   41026 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 15:42:52.386472   41026 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 15:42:52.403379   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 15:42:52.423711   41026 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 15:42:52.423088   41026 out.go:177]   - Using image docker.io/busybox:stable
	I0920 15:42:52.423116   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 15:42:52.441307   41026 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 15:42:52.444648   41026 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 15:42:52.464942   41026 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 15:42:52.465550   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 15:42:52.465627   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 15:42:52.502319   41026 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 15:42:52.503107   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 15:42:52.503189   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.503421   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.523268   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 15:42:52.542854   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.576918   41026 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0920 15:42:52.577113   41026 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 15:42:52.577258   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 15:42:52.577457   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.615012   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 15:42:52.619520   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.648357   41026 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 15:42:52.648379   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 15:42:52.687531   41026 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 15:42:52.687553   41026 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 15:42:52.693868   41026 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 15:42:52.704763   41026 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 15:42:52.704778   41026 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 15:42:52.720974   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 15:42:52.730894   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 15:42:52.754909   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.754931   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.772929   41026 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 15:42:52.776538   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.776549   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.779780   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 15:42:52.810027   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 15:42:52.810151   41026 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 15:42:52.810164   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 15:42:52.810302   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.813714   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.813714   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.850755   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.850775   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.850783   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.851444   41026 node_ready.go:35] waiting up to 6m0s for node "addons-918000" to be "Ready" ...
	I0920 15:42:52.869147   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 15:42:52.874016   41026 node_ready.go:49] node "addons-918000" has status "Ready":"True"
	I0920 15:42:52.874036   41026 node_ready.go:38] duration metric: took 22.569266ms for node "addons-918000" to be "Ready" ...
	I0920 15:42:52.874048   41026 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 15:42:52.883153   41026 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-918000" context rescaled to 1 replicas
	I0920 15:42:52.929141   41026 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 15:42:52.931853   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.951215   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 15:42:52.951242   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 15:42:52.951438   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:52.973252   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:52.987936   41026 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace to be "Ready" ...
	I0920 15:42:53.177540   41026 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 15:42:53.177557   41026 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 15:42:53.273926   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 15:42:53.273948   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 15:42:53.281470   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 15:42:53.380360   41026 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 15:42:53.380388   41026 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 15:42:53.484937   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 15:42:53.488581   41026 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 15:42:53.488594   41026 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 15:42:53.578684   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 15:42:53.582772   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 15:42:53.676205   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 15:42:53.676229   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 15:42:53.679903   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 15:42:53.685647   41026 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 15:42:53.685664   41026 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 15:42:53.685763   41026 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 15:42:53.685774   41026 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 15:42:53.685805   41026 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 15:42:53.685814   41026 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 15:42:53.974957   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 15:42:53.974976   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 15:42:53.984450   41026 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 15:42:53.984475   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 15:42:53.984921   41026 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 15:42:53.984938   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 15:42:53.987189   41026 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 15:42:53.987201   41026 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 15:42:54.273210   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 15:42:54.273239   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 15:42:54.275722   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 15:42:54.281717   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 15:42:54.381760   41026 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 15:42:54.381782   41026 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 15:42:54.585784   41026 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 15:42:54.585817   41026 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 15:42:54.673204   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 15:42:54.673225   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 15:42:54.983757   41026 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 15:42:54.983777   41026 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 15:42:55.077828   41026 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 15:42:55.077845   41026 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 15:42:55.078003   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:42:55.286847   41026 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 15:42:55.286864   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 15:42:55.378837   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 15:42:55.378856   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 15:42:55.580068   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 15:42:55.583833   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 15:42:55.583931   41026 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 15:42:55.985000   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 15:42:55.985047   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 15:42:56.477455   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 15:42:56.478045   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 15:42:56.778925   41026 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 15:42:56.778942   41026 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 15:42:57.080254   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 15:42:57.172980   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:42:58.981075   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.457695245s)
	W0920 15:42:58.981133   41026 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 15:42:58.981167   41026 retry.go:31] will retry after 166.553197ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
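[Editor's note] The failure above is a CRD-establishment race: the first `kubectl apply` creates the VolumeSnapshot CRDs and, in the same invocation, a `VolumeSnapshotClass` object, which fails because the new CRDs are not yet served; minikube's retry.go then re-applies after a short delay (later in the log, the `apply --force` retry completes). A minimal, self-contained sketch of that retry-with-backoff pattern, under the assumption that the apply step is an idempotent callable (`fake_apply` here simulates a CRD that becomes established after two failed attempts; it is hypothetical, not part of minikube):

```python
import time

def apply_with_retry(apply_fn, max_attempts=5, initial_delay=0.166):
    """Retry an idempotent apply until it succeeds, doubling the delay
    between attempts. Mirrors the pattern in the log above, where the
    first apply fails ("ensure CRDs are installed first") and a retry
    scheduled ~166ms later eventually succeeds. Returns the attempt
    number that succeeded."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        if apply_fn():
            return attempt
        time.sleep(delay)
        delay *= 2  # exponential backoff before the next attempt
    raise RuntimeError(f"apply did not succeed after {max_attempts} attempts")

# Hypothetical stand-in for the kubectl apply: fails twice while the
# CRDs are being established, then succeeds on the third call.
calls = {"n": 0}
def fake_apply():
    calls["n"] += 1
    return calls["n"] >= 3

attempts = apply_with_retry(fake_apply)
```

The alternative to retrying is to apply the CRD manifests in a first pass and wait for their `Established` condition before applying objects of those kinds.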
	I0920 15:42:58.981241   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.260200469s)
	I0920 15:42:58.981262   41026 addons.go:475] Verifying addon metrics-server=true in "addons-918000"
	I0920 15:42:58.981301   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.201457201s)
	I0920 15:42:58.981323   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.707316244s)
	I0920 15:42:58.981386   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.707398569s)
	I0920 15:42:58.981411   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.699883248s)
	I0920 15:42:58.981571   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.496572816s)
	I0920 15:42:59.150068   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 15:42:59.287252   41026 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 15:42:59.287354   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:42:59.306818   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:42:59.674961   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:42:59.978344   41026 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 15:43:00.176450   41026 addons.go:234] Setting addon gcp-auth=true in "addons-918000"
	I0920 15:43:00.176513   41026 host.go:66] Checking if "addons-918000" exists ...
	I0920 15:43:00.177183   41026 cli_runner.go:164] Run: docker container inspect addons-918000 --format={{.State.Status}}
	I0920 15:43:00.202374   41026 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 15:43:00.202464   41026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-918000
	I0920 15:43:00.221095   41026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61060 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/addons-918000/id_rsa Username:docker}
	I0920 15:43:01.989573   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.410771451s)
	I0920 15:43:01.989620   41026 addons.go:475] Verifying addon ingress=true in "addons-918000"
	I0920 15:43:02.016541   41026 out.go:177] * Verifying ingress addon...
	I0920 15:43:02.063824   41026 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 15:43:02.078993   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:02.079688   41026 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 15:43:02.079700   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:02.573926   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:03.079696   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:03.585608   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:04.077487   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:04.573771   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:04.575179   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:04.693618   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (11.110723165s)
	I0920 15:43:04.693642   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.013635615s)
	I0920 15:43:04.693723   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.41789829s)
	I0920 15:43:04.693745   41026 addons.go:475] Verifying addon registry=true in "addons-918000"
	I0920 15:43:04.693832   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.411953674s)
	I0920 15:43:04.693881   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (9.113713972s)
	I0920 15:43:04.721375   41026 out.go:177] * Verifying registry addon...
	I0920 15:43:04.767348   41026 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-918000 service yakd-dashboard -n yakd-dashboard
	
	I0920 15:43:04.806546   41026 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 15:43:04.851775   41026 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 15:43:04.851794   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:05.078741   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:05.373495   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:05.581448   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:05.701758   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.621377069s)
	I0920 15:43:05.701788   41026 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-918000"
	I0920 15:43:05.701818   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.551660857s)
	I0920 15:43:05.701854   41026 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.49941278s)
	I0920 15:43:05.725247   41026 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 15:43:05.744983   41026 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 15:43:05.821350   41026 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 15:43:05.878103   41026 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 15:43:05.916081   41026 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 15:43:05.916106   41026 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 15:43:05.923468   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:05.924063   41026 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 15:43:05.924078   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:05.939607   41026 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 15:43:05.939620   41026 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 15:43:05.957085   41026 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 15:43:05.957104   41026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 15:43:05.986459   41026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 15:43:06.074886   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:06.309974   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:06.374030   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:06.571961   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:06.874949   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:06.875787   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:06.997382   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:07.074379   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:07.179125   41026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.192621122s)
	I0920 15:43:07.180380   41026 addons.go:475] Verifying addon gcp-auth=true in "addons-918000"
	I0920 15:43:07.206736   41026 out.go:177] * Verifying gcp-auth addon...
	I0920 15:43:07.281478   41026 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 15:43:07.284268   41026 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 15:43:07.387461   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:07.387953   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:07.572430   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:07.810780   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:07.825418   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:08.067942   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:08.310518   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:08.325501   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:08.494322   41026 pod_ready.go:98] pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:43:08 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 15:42:52 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 15:42:56 -0700 PDT,FinishedAt:2024-09-20 15:43:07 -0700 PDT,ContainerID:docker://e56f81d74bb45e4569c21ab886193343b45536d636e927dd29eb68783994c517,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://e56f81d74bb45e4569c21ab886193343b45536d636e927dd29eb68783994c517 Started:0xc001f87ba0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e35970} {Name:kube-api-access-zt8px MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e35980}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 15:43:08.494340   41026 pod_ready.go:82] duration metric: took 15.506273284s for pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace to be "Ready" ...
	E0920 15:43:08.494349   41026 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-5gv2v" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:43:08 -0700 PDT Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-20 15:42:52 -0700 PDT Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-09-20 15:42:52 -0700 PDT InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-09-20 15:42:56 -0700 PDT,FinishedAt:2024-09-20 15:43:07 -0700 PDT,ContainerID:docker://e56f81d74bb45e4569c21ab886193343b45536d636e927dd29eb68783994c517,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e ContainerID:docker://e56f81d74bb45e4569c21ab886193343b45536d636e927dd29eb68783994c517 Started:0xc001f87ba0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc001e35970} {Name:kube-api-access-zt8px MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc001e35980}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0920 15:43:08.494358   41026 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:08.567379   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:08.811300   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:08.827318   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:09.070425   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:09.310967   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:09.324884   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:09.568164   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:09.810027   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:09.826718   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:10.070372   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:10.310641   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:10.325892   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:10.500874   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:10.572522   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:10.810513   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:10.824913   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:11.067823   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:11.309933   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:11.326310   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:11.568454   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:11.809614   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:11.824871   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:12.074485   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:12.309971   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:12.326250   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:12.501442   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:12.569514   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:12.812726   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:12.824700   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:13.073156   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:13.310186   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:13.325519   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:13.568234   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:13.810617   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:13.826068   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:14.068674   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:14.309966   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:14.324960   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:14.572700   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:14.809582   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:14.825947   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:15.001647   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:15.070606   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:15.312138   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:15.326380   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:15.569286   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:15.809667   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:15.825826   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:16.067623   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:16.309292   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:16.325890   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:16.568100   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:16.810312   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:16.826307   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:17.073391   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:17.310790   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:17.325593   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:17.500419   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:17.568195   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:17.809332   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:17.826118   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:18.069398   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:18.311119   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:18.325704   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:18.568146   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:18.810849   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:18.826114   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:19.068157   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:19.312298   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:19.327490   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:19.500727   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:19.572829   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:19.809977   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:19.827968   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:20.074467   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:20.310458   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:20.324908   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:20.567848   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:20.810364   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:20.824695   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:21.071305   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:21.310297   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:21.324881   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:21.500959   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:21.568498   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:21.810341   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:21.824840   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:22.068145   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:22.310698   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:22.325075   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:22.573733   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:22.809849   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:22.825375   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:23.068360   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:23.311295   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:23.324958   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:23.568424   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:23.810279   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:23.825152   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:23.999811   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:24.068093   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:24.309315   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:24.324724   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:24.567635   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:24.809494   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:24.826063   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:25.068105   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:25.310331   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:25.324886   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:25.568610   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:25.810385   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:25.826254   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:26.002673   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:26.067333   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:26.309443   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:26.324802   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:26.568155   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:26.810494   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:26.825655   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:27.169815   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:27.310093   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:27.325802   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:27.567768   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:27.809805   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:27.824883   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:28.070301   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:28.310068   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:28.325265   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:28.500785   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:28.567911   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:28.810020   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:28.825167   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:29.067551   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:29.310297   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:29.327332   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:29.569182   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:29.812452   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:29.824875   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:30.067382   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:30.309534   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:30.324803   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:30.500977   41026 pod_ready.go:103] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"False"
	I0920 15:43:30.569495   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:30.809630   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:30.825673   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:31.069635   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:31.309814   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:31.326023   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:31.569576   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:31.810177   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:31.825665   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:32.068925   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:32.310840   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:32.325777   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:32.567280   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:32.810392   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:32.825762   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:32.999627   41026 pod_ready.go:93] pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:32.999640   41026 pod_ready.go:82] duration metric: took 24.505091415s for pod "coredns-7c65d6cfc9-drdjn" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:32.999654   41026 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.004411   41026 pod_ready.go:93] pod "etcd-addons-918000" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:33.004426   41026 pod_ready.go:82] duration metric: took 4.761007ms for pod "etcd-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.004435   41026 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.009781   41026 pod_ready.go:93] pod "kube-apiserver-addons-918000" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:33.009792   41026 pod_ready.go:82] duration metric: took 5.350421ms for pod "kube-apiserver-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.009798   41026 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.014299   41026 pod_ready.go:93] pod "kube-controller-manager-addons-918000" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:33.014310   41026 pod_ready.go:82] duration metric: took 4.507836ms for pod "kube-controller-manager-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.014316   41026 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-cg2fb" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.019099   41026 pod_ready.go:93] pod "kube-proxy-cg2fb" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:33.019109   41026 pod_ready.go:82] duration metric: took 4.789033ms for pod "kube-proxy-cg2fb" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.019115   41026 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.068167   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:33.310361   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:33.326009   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:33.403724   41026 pod_ready.go:93] pod "kube-scheduler-addons-918000" in "kube-system" namespace has status "Ready":"True"
	I0920 15:43:33.403738   41026 pod_ready.go:82] duration metric: took 384.615172ms for pod "kube-scheduler-addons-918000" in "kube-system" namespace to be "Ready" ...
	I0920 15:43:33.403747   41026 pod_ready.go:39] duration metric: took 40.529378376s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 15:43:33.403761   41026 api_server.go:52] waiting for apiserver process to appear ...
	I0920 15:43:33.403825   41026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 15:43:33.416718   41026 api_server.go:72] duration metric: took 41.768875137s to wait for apiserver process to appear ...
	I0920 15:43:33.416730   41026 api_server.go:88] waiting for apiserver healthz status ...
	I0920 15:43:33.416750   41026 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:61059/healthz ...
	I0920 15:43:33.421101   41026 api_server.go:279] https://127.0.0.1:61059/healthz returned 200:
	ok
	I0920 15:43:33.422317   41026 api_server.go:141] control plane version: v1.31.1
	I0920 15:43:33.422330   41026 api_server.go:131] duration metric: took 5.595127ms to wait for apiserver health ...
	I0920 15:43:33.422338   41026 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 15:43:33.568473   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:33.602152   41026 system_pods.go:59] 17 kube-system pods found
	I0920 15:43:33.602175   41026 system_pods.go:61] "coredns-7c65d6cfc9-drdjn" [4e57600f-7406-4ace-a23d-c72c8a8a53cb] Running
	I0920 15:43:33.602181   41026 system_pods.go:61] "csi-hostpath-attacher-0" [571b4dca-0bd8-4f15-a5f1-6726a6211736] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 15:43:33.602185   41026 system_pods.go:61] "csi-hostpath-resizer-0" [ad5ca475-b639-42e5-b181-174e672cc5f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 15:43:33.602191   41026 system_pods.go:61] "csi-hostpathplugin-29v65" [5d03ed9a-5109-4900-86f3-f80c395f8bb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 15:43:33.602194   41026 system_pods.go:61] "etcd-addons-918000" [b1861005-9538-458b-8504-4df573f60464] Running
	I0920 15:43:33.602197   41026 system_pods.go:61] "kube-apiserver-addons-918000" [088bd735-1d94-46b1-a3bf-14b9f2f95cc2] Running
	I0920 15:43:33.602200   41026 system_pods.go:61] "kube-controller-manager-addons-918000" [83185b09-a1aa-4998-bff4-de7eb9f86510] Running
	I0920 15:43:33.602203   41026 system_pods.go:61] "kube-ingress-dns-minikube" [da67f9d9-15de-4d97-a262-0943f777bc74] Running
	I0920 15:43:33.602206   41026 system_pods.go:61] "kube-proxy-cg2fb" [2470b9f0-4995-481c-98a7-a29b1b024c1b] Running
	I0920 15:43:33.602209   41026 system_pods.go:61] "kube-scheduler-addons-918000" [3d55536f-f423-42a1-aa9b-ec1af35bed7d] Running
	I0920 15:43:33.602214   41026 system_pods.go:61] "metrics-server-84c5f94fbc-4k7vz" [906fee52-1f0b-46f7-a88d-aa43570f58dc] Running
	I0920 15:43:33.602219   41026 system_pods.go:61] "nvidia-device-plugin-daemonset-jlk6j" [b183cbd1-aab3-4216-a691-a6d5e8512427] Running
	I0920 15:43:33.602221   41026 system_pods.go:61] "registry-66c9cd494c-lnt2v" [78e593b3-9f6d-4e81-a44a-8d0c99ad1e53] Running
	I0920 15:43:33.602225   41026 system_pods.go:61] "registry-proxy-t5vth" [2c21fbde-2aec-4dbb-b6db-fbc24c448343] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 15:43:33.602231   41026 system_pods.go:61] "snapshot-controller-56fcc65765-m4p7r" [1117f65d-83d2-4c22-abf0-aed7b573df30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 15:43:33.602235   41026 system_pods.go:61] "snapshot-controller-56fcc65765-qsp4g" [24eb5704-62d2-4161-9055-a94d87a45448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 15:43:33.602239   41026 system_pods.go:61] "storage-provisioner" [2c82c337-a991-4ab2-87a2-2a4e3832b835] Running
	I0920 15:43:33.602243   41026 system_pods.go:74] duration metric: took 179.899712ms to wait for pod list to return data ...
	I0920 15:43:33.602249   41026 default_sa.go:34] waiting for default service account to be created ...
	I0920 15:43:33.797484   41026 default_sa.go:45] found service account: "default"
	I0920 15:43:33.797498   41026 default_sa.go:55] duration metric: took 195.241944ms for default service account to be created ...
	I0920 15:43:33.797505   41026 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 15:43:33.809250   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:33.825244   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:34.003174   41026 system_pods.go:86] 17 kube-system pods found
	I0920 15:43:34.003191   41026 system_pods.go:89] "coredns-7c65d6cfc9-drdjn" [4e57600f-7406-4ace-a23d-c72c8a8a53cb] Running
	I0920 15:43:34.003197   41026 system_pods.go:89] "csi-hostpath-attacher-0" [571b4dca-0bd8-4f15-a5f1-6726a6211736] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 15:43:34.003202   41026 system_pods.go:89] "csi-hostpath-resizer-0" [ad5ca475-b639-42e5-b181-174e672cc5f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 15:43:34.003207   41026 system_pods.go:89] "csi-hostpathplugin-29v65" [5d03ed9a-5109-4900-86f3-f80c395f8bb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 15:43:34.003212   41026 system_pods.go:89] "etcd-addons-918000" [b1861005-9538-458b-8504-4df573f60464] Running
	I0920 15:43:34.003215   41026 system_pods.go:89] "kube-apiserver-addons-918000" [088bd735-1d94-46b1-a3bf-14b9f2f95cc2] Running
	I0920 15:43:34.003218   41026 system_pods.go:89] "kube-controller-manager-addons-918000" [83185b09-a1aa-4998-bff4-de7eb9f86510] Running
	I0920 15:43:34.003229   41026 system_pods.go:89] "kube-ingress-dns-minikube" [da67f9d9-15de-4d97-a262-0943f777bc74] Running
	I0920 15:43:34.003232   41026 system_pods.go:89] "kube-proxy-cg2fb" [2470b9f0-4995-481c-98a7-a29b1b024c1b] Running
	I0920 15:43:34.003235   41026 system_pods.go:89] "kube-scheduler-addons-918000" [3d55536f-f423-42a1-aa9b-ec1af35bed7d] Running
	I0920 15:43:34.003238   41026 system_pods.go:89] "metrics-server-84c5f94fbc-4k7vz" [906fee52-1f0b-46f7-a88d-aa43570f58dc] Running
	I0920 15:43:34.003241   41026 system_pods.go:89] "nvidia-device-plugin-daemonset-jlk6j" [b183cbd1-aab3-4216-a691-a6d5e8512427] Running
	I0920 15:43:34.003244   41026 system_pods.go:89] "registry-66c9cd494c-lnt2v" [78e593b3-9f6d-4e81-a44a-8d0c99ad1e53] Running
	I0920 15:43:34.003250   41026 system_pods.go:89] "registry-proxy-t5vth" [2c21fbde-2aec-4dbb-b6db-fbc24c448343] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 15:43:34.003254   41026 system_pods.go:89] "snapshot-controller-56fcc65765-m4p7r" [1117f65d-83d2-4c22-abf0-aed7b573df30] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 15:43:34.003260   41026 system_pods.go:89] "snapshot-controller-56fcc65765-qsp4g" [24eb5704-62d2-4161-9055-a94d87a45448] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 15:43:34.003263   41026 system_pods.go:89] "storage-provisioner" [2c82c337-a991-4ab2-87a2-2a4e3832b835] Running
	I0920 15:43:34.003268   41026 system_pods.go:126] duration metric: took 205.758199ms to wait for k8s-apps to be running ...
	I0920 15:43:34.003273   41026 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 15:43:34.003335   41026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 15:43:34.037649   41026 system_svc.go:56] duration metric: took 34.366647ms WaitForService to wait for kubelet
	I0920 15:43:34.037672   41026 kubeadm.go:582] duration metric: took 42.389825042s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 15:43:34.037691   41026 node_conditions.go:102] verifying NodePressure condition ...
	I0920 15:43:34.071828   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:34.199143   41026 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0920 15:43:34.199176   41026 node_conditions.go:123] node cpu capacity is 12
	I0920 15:43:34.199197   41026 node_conditions.go:105] duration metric: took 161.498047ms to run NodePressure ...
	I0920 15:43:34.199213   41026 start.go:241] waiting for startup goroutines ...
	I0920 15:43:34.309709   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:34.325241   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:34.574669   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:34.810511   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:34.828682   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:35.068453   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:35.310508   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:35.325332   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:35.568943   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:35.811888   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 15:43:35.826363   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:36.068766   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:36.311848   41026 kapi.go:107] duration metric: took 31.505064628s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 15:43:36.325777   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:36.569924   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:36.826321   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:37.072346   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:37.387623   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:37.569262   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:37.825884   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:38.067900   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:38.328748   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:38.570808   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:38.826635   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:39.068323   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:39.326149   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:39.568626   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:39.826190   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:40.067754   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:40.326295   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:40.573184   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:40.825889   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:41.073809   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:41.325488   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:41.571469   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:41.894560   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:42.067766   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:42.327846   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:42.570016   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:42.829102   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:43.068625   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:43.326221   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:43.568780   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:43.826616   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:44.067740   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:44.327053   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:44.573834   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:44.825717   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:45.070101   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:45.326742   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:45.571064   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:45.826183   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:46.069655   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:46.325980   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:46.570632   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:46.826192   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:47.067580   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:47.326159   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:47.568753   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:47.825431   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:48.068414   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:48.326187   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:48.573851   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:48.826466   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:49.071894   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:49.327213   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:49.569377   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:49.826423   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:50.068146   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:50.326531   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:50.567959   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:50.825227   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:51.074473   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:51.387907   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:51.568624   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:51.828340   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:52.068681   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:52.326578   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:52.574729   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:52.825656   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:53.073053   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:53.325373   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:53.568258   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:53.825708   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:54.073025   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:54.326820   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:54.568806   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:54.826541   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:55.070495   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:55.328897   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:55.570528   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:55.825446   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:56.073103   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:56.327759   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:56.573020   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:56.826567   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:57.069131   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:57.326513   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:57.568624   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:57.827149   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:58.068052   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:58.325613   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:58.571773   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:58.825583   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:59.070548   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:59.387649   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:43:59.568520   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:43:59.826588   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:00.069207   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:00.326146   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:00.568153   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:00.827502   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:01.068525   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:01.327625   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:01.568601   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:01.826607   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:02.072986   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:02.325814   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:02.569726   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:02.828214   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:03.068271   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:03.326957   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:03.568894   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:03.827305   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:04.068557   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:04.325629   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:04.567888   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:04.825821   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:05.073353   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:05.327710   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:05.568628   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:05.828837   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:06.068911   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:06.326443   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:06.567952   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:06.826151   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:07.074427   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:07.328153   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:07.568218   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:07.873911   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:08.069322   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:08.325723   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:08.567797   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:08.826195   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:09.068660   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:09.325993   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:09.569826   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:09.826219   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:10.070689   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:10.325315   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:10.568906   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:10.826357   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:11.068405   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:11.326521   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:11.568501   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:11.826054   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:12.068496   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:12.325896   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:12.569626   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:12.826100   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:13.073179   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:13.326657   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:13.572044   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:13.825471   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:14.069286   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:14.326159   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:14.568298   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:14.825843   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:15.072308   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:15.326549   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:15.573945   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:15.825729   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:16.069251   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:16.387358   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:16.568388   41026 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 15:44:16.828549   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:17.069120   41026 kapi.go:107] duration metric: took 1m15.00473348s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 15:44:17.325337   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:17.826048   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:18.325749   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:18.826274   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:19.325589   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:19.825496   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:20.327896   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:20.826027   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:21.326487   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:21.826067   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:22.327423   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:22.826355   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:23.325812   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:23.826949   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:24.325884   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:24.826550   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:25.326536   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:25.825575   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:26.326054   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:26.826397   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 15:44:27.325620   41026 kapi.go:107] duration metric: took 1m21.503654257s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 15:44:30.286833   41026 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 15:44:30.286845   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:30.787069   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:31.289035   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:31.788579   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:32.287427   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:32.786611   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:33.288775   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:33.788590   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:34.285899   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:34.787411   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:35.287301   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:35.788685   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:36.286715   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:36.788135   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:37.287068   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:37.786163   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:38.289232   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:38.788869   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:39.288871   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:39.788329   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:40.287712   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:40.787818   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:41.288636   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:41.785803   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:42.287349   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:42.787249   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:43.287222   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:43.787695   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:44.287862   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:44.789221   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:45.288361   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:45.787786   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:46.285703   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:46.788513   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:47.288559   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:47.789110   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:48.286798   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:48.787366   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:49.288926   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:49.789196   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:50.288511   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:50.786737   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:51.286756   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:51.789084   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:52.288710   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:52.787156   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:53.286664   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:53.788683   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:54.288512   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:54.788080   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:55.288481   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:55.789356   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:56.286490   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:56.788637   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:57.288969   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:57.787226   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:58.289095   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:58.788320   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:59.288716   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:44:59.786612   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:00.287466   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:00.788054   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:01.287725   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:01.789192   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:02.286746   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:02.786716   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:03.290305   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:03.788928   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:04.288057   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:04.786848   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:05.288427   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:05.786324   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:06.285112   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:06.788167   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:07.288587   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:07.789074   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:08.287528   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:08.787696   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:09.287582   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:09.788841   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:10.286790   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:10.786431   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:11.287535   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:11.787534   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:12.287048   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:12.788288   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:13.287519   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:13.790097   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:14.286669   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:14.786541   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:15.288488   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:15.788387   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:16.286548   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:16.788598   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:17.287439   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:17.789290   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:18.287643   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:18.787036   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:19.285958   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:19.785979   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:20.288586   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:20.786503   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:21.288909   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:21.785327   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:22.287540   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:22.787490   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:23.286873   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:23.786613   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:24.287551   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:24.789120   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:25.287729   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:25.789098   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:26.286988   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:26.787737   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:27.287077   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:27.786978   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:28.287157   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:28.787923   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:29.288833   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:29.789328   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:30.286991   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:30.787534   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:31.288963   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:31.786356   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:32.287597   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:32.787197   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:33.288179   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:33.788023   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:34.287661   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:34.787894   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:35.289198   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:35.786042   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:36.287401   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:36.786843   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:37.288888   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:37.786637   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:38.286996   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:38.785410   41026 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 15:45:39.289714   41026 kapi.go:107] duration metric: took 2m32.007091645s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 15:45:39.312614   41026 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-918000 cluster.
	I0920 15:45:39.387741   41026 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 15:45:39.446518   41026 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 15:45:39.488685   41026 out.go:177] * Enabled addons: metrics-server, cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, default-storageclass, volcano, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0920 15:45:39.547897   41026 addons.go:510] duration metric: took 2m47.899077168s for enable addons: enabled=[metrics-server cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns default-storageclass volcano inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0920 15:45:39.547963   41026 start.go:246] waiting for cluster config update ...
	I0920 15:45:39.547992   41026 start.go:255] writing updated cluster config ...
	I0920 15:45:39.548708   41026 ssh_runner.go:195] Run: rm -f paused
	I0920 15:45:39.592989   41026 start.go:600] kubectl: 1.29.2, cluster: 1.31.1 (minor skew: 2)
	I0920 15:45:39.613739   41026 out.go:201] 
	W0920 15:45:39.634607   41026 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1.
	I0920 15:45:39.671910   41026 out.go:177]   - Want kubectl v1.31.1? Try 'minikube kubectl -- get pods -A'
	I0920 15:45:39.750743   41026 out.go:177] * Done! kubectl is now configured to use "addons-918000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 20 22:54:38 addons-918000 dockerd[1238]: time="2024-09-20T22:54:38.715206462Z" level=info msg="ignoring event" container=05837c792d0541ce2dd90df570c175e5a6592368bdfecba278513e4b4e397d73 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:54:38 addons-918000 dockerd[1238]: time="2024-09-20T22:54:38.843025149Z" level=info msg="ignoring event" container=86cb99f0f36f966622bcb9b87a67ea93474167f70f368e980ce1802c189bb39b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:54:48 addons-918000 dockerd[1238]: time="2024-09-20T22:54:48.279880375Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=900ae170e0d3e1e1 traceID=234b5726a03c3a7dedf441823ca0bd84
	Sep 20 22:54:48 addons-918000 dockerd[1238]: time="2024-09-20T22:54:48.282500145Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=900ae170e0d3e1e1 traceID=234b5726a03c3a7dedf441823ca0bd84
	Sep 20 22:54:49 addons-918000 dockerd[1238]: time="2024-09-20T22:54:49.360840013Z" level=info msg="ignoring event" container=9fd92d028d9349f8ac5bca77d5cb78ff5ad4eba2eee4ee1050a0ad2633937d27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:54:49 addons-918000 dockerd[1238]: time="2024-09-20T22:54:49.469431530Z" level=info msg="ignoring event" container=cc3f5d0937a836aaf55884fdcaa67dd8ddb5c32325130b0bcc0ecfd26b7d36b8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:54:59 addons-918000 dockerd[1238]: time="2024-09-20T22:54:59.961837819Z" level=info msg="ignoring event" container=97297411ac51ad2d69ce8f1cdeca7f5d44bc0c685200f05faf3f0c846d1a5a0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:00 addons-918000 dockerd[1238]: time="2024-09-20T22:55:00.116734268Z" level=info msg="ignoring event" container=a3fea923126343ae74d08179aba641c6fcfc71fbacd52aabc1188b47f727facc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:00 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc33677695fb2f7cbda1c01ae94e8a8bf68bc64815d8ab594a9b6c7a344c80a5/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 20 22:55:01 addons-918000 dockerd[1238]: time="2024-09-20T22:55:01.002157378Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=244bedcde1a357fe traceID=9faa725b05cd5282551eaa38ae0c0800
	Sep 20 22:55:02 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:02Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 20 22:55:02 addons-918000 dockerd[1238]: time="2024-09-20T22:55:02.438117379Z" level=info msg="ignoring event" container=1d3435b089fd1dc7c36860679383c08b88a7e2b86433e99c21385be6f06ef6d9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:03 addons-918000 dockerd[1238]: time="2024-09-20T22:55:03.610032809Z" level=info msg="ignoring event" container=bc33677695fb2f7cbda1c01ae94e8a8bf68bc64815d8ab594a9b6c7a344c80a5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:05 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0750d4ecde79ad338bc98fc86d7d9310956ed3b933fd28fcd4c3dc6894f3cb04/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 20 22:55:07 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:07Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 20 22:55:07 addons-918000 dockerd[1238]: time="2024-09-20T22:55:07.567116813Z" level=info msg="ignoring event" container=2b6a90462732d44641c08352e240f1033afcb9331f489090de21c0a58391730b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:08 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:08Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 20 22:55:08 addons-918000 dockerd[1238]: time="2024-09-20T22:55:08.782969455Z" level=info msg="ignoring event" container=93c828a27e0ab7bfa27cf82a8195a3ecce10cd33760f17b6aa8fccc821de0193 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:08 addons-918000 dockerd[1238]: time="2024-09-20T22:55:08.807742269Z" level=info msg="ignoring event" container=0750d4ecde79ad338bc98fc86d7d9310956ed3b933fd28fcd4c3dc6894f3cb04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:10 addons-918000 cri-dockerd[1509]: time="2024-09-20T22:55:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1f917dfece1e8e5be11e5fa4fe2713eaecad343bd9b422b1ebed46a189e1ab8a/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 20 22:55:10 addons-918000 dockerd[1238]: time="2024-09-20T22:55:10.410933626Z" level=info msg="ignoring event" container=daf670c8410055933a7e0d1a2984b8e80239e7ced4dc95c7b80bb23d59347c5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:11 addons-918000 dockerd[1238]: time="2024-09-20T22:55:11.273489280Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=b30568582acca86e traceID=03c84de4a4f5493c7437d6057dd08d1d
	Sep 20 22:55:11 addons-918000 dockerd[1238]: time="2024-09-20T22:55:11.276296870Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=b30568582acca86e traceID=03c84de4a4f5493c7437d6057dd08d1d
	Sep 20 22:55:11 addons-918000 dockerd[1238]: time="2024-09-20T22:55:11.866249347Z" level=info msg="ignoring event" container=1f917dfece1e8e5be11e5fa4fe2713eaecad343bd9b422b1ebed46a189e1ab8a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 20 22:55:34 addons-918000 dockerd[1238]: time="2024-09-20T22:55:34.448089678Z" level=info msg="ignoring event" container=3708b0eb0574ec5e55fa7a5debe0e0663338e9716cb08531a34627d92bef8763 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	daf670c841005       a416a98b71e22                                                                                                                                25 seconds ago      Exited              helper-pod                               0                   1f917dfece1e8       helper-pod-delete-pvc-1e953f3f-0f81-401a-ab56-c6fd2854bea4
	93c828a27e0ab       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            27 seconds ago      Exited              gadget                                   7                   d6ebdac193f56       gadget-x2jgz
	2b6a90462732d       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              28 seconds ago      Exited              busybox                                  0                   0750d4ecde79a       test-local-path
	1d3435b089fd1       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                                              33 seconds ago      Exited              helper-pod                               0                   bc33677695fb2       helper-pod-create-pvc-1e953f3f-0f81-401a-ab56-c6fd2854bea4
	d465e0053bc49       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   1e627bdf30bcb       gcp-auth-89d5ffd79-4q8xh
	93611b2a18abb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   db32e24cce15c       csi-hostpathplugin-29v65
	3832984ed7cdd       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   db32e24cce15c       csi-hostpathplugin-29v65
	98fd759e7a2e7       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   db32e24cce15c       csi-hostpathplugin-29v65
	5aa1746b46918       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   db32e24cce15c       csi-hostpathplugin-29v65
	8182105f3fa4a       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   d6b0051d349fa       ingress-nginx-controller-bc57996ff-95twz
	f9166a446b6cf       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   db32e24cce15c       csi-hostpathplugin-29v65
	7d25024b81361       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   08cc1170b63b1       csi-hostpath-attacher-0
	6d3688c7d1932       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   4415fe11bf4b9       csi-hostpath-resizer-0
	a8f1d8ccff2de       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   db32e24cce15c       csi-hostpathplugin-29v65
	60147eeaf6f20       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              patch                                    0                   a78ee5a16357a       ingress-nginx-admission-patch-9fbgb
	e1816edf1ff52       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   79951b15b3f05       ingress-nginx-admission-create-pjv82
	3ec693da64065       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   dfcf0fd48d21c       snapshot-controller-56fcc65765-m4p7r
	a673349d1dba5       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   f248a95f80342       snapshot-controller-56fcc65765-qsp4g
	50dcca095e723       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       11 minutes ago      Running             local-path-provisioner                   0                   f00283a2f30a4       local-path-provisioner-86d989889c-jh99n
	d6a82d32edadb       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367                              12 minutes ago      Running             registry-proxy                           0                   50b682abcf2c6       registry-proxy-t5vth
	2c2825e29e913       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                                             12 minutes ago      Running             registry                                 0                   0a9894843009d       registry-66c9cd494c-lnt2v
	9ef03de657756       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   6099e7dcbcf58       kube-ingress-dns-minikube
	167c915120b94       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   43f4202736919       metrics-server-84c5f94fbc-4k7vz
	8e943bcf4c406       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   5702b60059bb4       cloud-spanner-emulator-769b77f747-t4tvj
	1a82b40d4d163       6e38f40d628db                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   22f2563f10e6c       storage-provisioner
	636c128f75269       c69fa2e9cbf5f                                                                                                                                12 minutes ago      Running             coredns                                  0                   03455f95f98a4       coredns-7c65d6cfc9-drdjn
	8f11984032d7e       60c005f310ff3                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   d0b210af972ad       kube-proxy-cg2fb
	c26c12f74543e       6bab7719df100                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   e69bd5f506836       kube-apiserver-addons-918000
	6e9be016c18f9       9aa1fad941575                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   26239db6a653e       kube-scheduler-addons-918000
	f179f22065168       175ffd71cce3d                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   2375486dbc5f7       kube-controller-manager-addons-918000
	405ee11771ddb       2e96e5913fc06                                                                                                                                12 minutes ago      Running             etcd                                     0                   c7992ab363130       etcd-addons-918000
	
	
	==> controller_ingress [8182105f3fa4] <==
	W0920 22:44:16.301304       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0920 22:44:16.301455       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0920 22:44:16.307558       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/amd64"
	I0920 22:44:16.642308       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0920 22:44:16.654911       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0920 22:44:16.661598       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0920 22:44:16.668125       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"2f90d776-d6a8-4d41-a03e-4e0774acfff3", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0920 22:44:16.669837       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"39ed165a-1594-4850-b8d1-8a01b49dc39c", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0920 22:44:16.669899       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"44b960dc-5c88-45dd-9204-da1219b5b1a1", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0920 22:44:17.871139       6 nginx.go:317] "Starting NGINX process"
	I0920 22:44:17.871500       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0920 22:44:17.871600       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0920 22:44:17.872436       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0920 22:44:17.877293       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0920 22:44:17.877331       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-95twz"
	I0920 22:44:17.886688       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-95twz" node="addons-918000"
	I0920 22:44:17.898524       6 controller.go:213] "Backend successfully reloaded"
	I0920 22:44:17.898672       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0920 22:44:17.898774       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-95twz", UID:"e1cdd6be-beb3-4d82-8c75-2ed71fd45f4e", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [636c128f7526] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.8:46615 - 30571 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000297367s
	[INFO] 10.244.0.8:46615 - 1129 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000319007s
	[INFO] 10.244.0.8:58560 - 5596 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00010227s
	[INFO] 10.244.0.8:58560 - 11998 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000110962s
	[INFO] 10.244.0.8:56606 - 21632 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000159915s
	[INFO] 10.244.0.8:56606 - 46467 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000316965s
	[INFO] 10.244.0.8:35874 - 44717 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000138204s
	[INFO] 10.244.0.8:35874 - 9618 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168082s
	[INFO] 10.244.0.8:38402 - 47900 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000128152s
	[INFO] 10.244.0.8:38402 - 30495 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000221878s
	[INFO] 10.244.0.8:47429 - 55783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000092064s
	[INFO] 10.244.0.8:47429 - 4065 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000211311s
	[INFO] 10.244.0.8:56747 - 58930 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000063426s
	[INFO] 10.244.0.8:56747 - 9520 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000037861s
	[INFO] 10.244.0.8:57299 - 10895 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066738s
	[INFO] 10.244.0.8:57299 - 25229 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067734s
	[INFO] 10.244.0.25:36064 - 27952 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000291967s
	[INFO] 10.244.0.25:36911 - 21678 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000361197s
	[INFO] 10.244.0.25:57305 - 41386 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134212s
	[INFO] 10.244.0.25:39878 - 40418 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000114332s
	[INFO] 10.244.0.25:45255 - 21295 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086203s
	[INFO] 10.244.0.25:38597 - 10469 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109046s
	[INFO] 10.244.0.25:35064 - 18850 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 192 0.003178987s
	[INFO] 10.244.0.25:37378 - 2890 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004498791s
	
	
	==> describe nodes <==
	Name:               addons-918000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-918000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b921bee7dddd4990dd76a4773b23d7ec11e6144f
	                    minikube.k8s.io/name=addons-918000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T15_42_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-918000
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-918000"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 22:42:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-918000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 22:55:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 22:55:22 +0000   Fri, 20 Sep 2024 22:42:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 22:55:22 +0000   Fri, 20 Sep 2024 22:42:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 22:55:22 +0000   Fri, 20 Sep 2024 22:42:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 22:55:22 +0000   Fri, 20 Sep 2024 22:42:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-918000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             8027444Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             8027444Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1588e7a4b4e4c538e89dde4a46d8fd7
	  System UUID:                f1588e7a4b4e4c538e89dde4a46d8fd7
	  Boot ID:                    d49d530d-a588-4fa0-8aea-3c2ab189fae2
	  Kernel Version:             6.6.32-linuxkit
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.3.0
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-t4tvj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-x2jgz                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-4q8xh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-95twz    100m (0%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-drdjn                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-29v65                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-918000                          100m (0%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-918000                250m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-918000       200m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-cg2fb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-918000                100m (0%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-4k7vz             100m (0%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 registry-66c9cd494c-lnt2v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-t5vth                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-m4p7r        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-qsp4g        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-jh99n     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (7%)   0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-918000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node addons-918000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-918000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-918000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-918000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-918000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node addons-918000 event: Registered Node addons-918000 in Controller
	
	
	==> dmesg <==
	[Sep20 19:33] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.000978] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.002163] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.001308] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.264579] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.001196] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.000020] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	[  +0.001093] overlay: /var/lib/docker/overlay2/l/4PUK55QWYLHTCT42EKJJXTPMUE is not a directory
	
	
	==> etcd [405ee11771dd] <==
	{"level":"info","ts":"2024-09-20T22:42:58.475191Z","caller":"traceutil/trace.go:171","msg":"trace[1678585395] transaction","detail":"{read_only:false; response_revision:577; number_of_response:1; }","duration":"102.022393ms","start":"2024-09-20T22:42:58.373149Z","end":"2024-09-20T22:42:58.475171Z","steps":["trace[1678585395] 'process raft request'  (duration: 101.576013ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:42:58.475232Z","caller":"traceutil/trace.go:171","msg":"trace[1271427881] transaction","detail":"{read_only:false; response_revision:576; number_of_response:1; }","duration":"102.173587ms","start":"2024-09-20T22:42:58.373046Z","end":"2024-09-20T22:42:58.475219Z","steps":["trace[1271427881] 'process raft request'  (duration: 101.605228ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:42:59.971628Z","caller":"traceutil/trace.go:171","msg":"trace[2083651886] transaction","detail":"{read_only:false; response_revision:660; number_of_response:1; }","duration":"101.5427ms","start":"2024-09-20T22:42:59.870030Z","end":"2024-09-20T22:42:59.971572Z","steps":["trace[2083651886] 'process raft request'  (duration: 13.181348ms)","trace[2083651886] 'compare'  (duration: 88.016847ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T22:43:05.879007Z","caller":"traceutil/trace.go:171","msg":"trace[1131715262] transaction","detail":"{read_only:false; response_revision:921; number_of_response:1; }","duration":"177.744122ms","start":"2024-09-20T22:43:05.701225Z","end":"2024-09-20T22:43:05.878969Z","steps":["trace[1131715262] 'process raft request'  (duration: 177.610544ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:43:05.918725Z","caller":"traceutil/trace.go:171","msg":"trace[1293247006] linearizableReadLoop","detail":"{readStateIndex:943; appliedIndex:941; }","duration":"134.277688ms","start":"2024-09-20T22:43:05.784436Z","end":"2024-09-20T22:43:05.918714Z","steps":["trace[1293247006] 'read index received'  (duration: 94.778801ms)","trace[1293247006] 'applied index is now lower than readState.Index'  (duration: 39.49853ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T22:43:05.918819Z","caller":"traceutil/trace.go:171","msg":"trace[98973411] transaction","detail":"{read_only:false; response_revision:922; number_of_response:1; }","duration":"217.568482ms","start":"2024-09-20T22:43:05.701245Z","end":"2024-09-20T22:43:05.918814Z","steps":["trace[98973411] 'process raft request'  (duration: 217.089938ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:43:05.918948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.952018ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T22:43:05.918991Z","caller":"traceutil/trace.go:171","msg":"trace[1569558827] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:924; }","duration":"110.003826ms","start":"2024-09-20T22:43:05.808977Z","end":"2024-09-20T22:43:05.918981Z","steps":["trace[1569558827] 'agreement among raft nodes before linearized reading'  (duration: 109.921507ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:43:05.919066Z","caller":"traceutil/trace.go:171","msg":"trace[1674659129] transaction","detail":"{read_only:false; response_revision:923; number_of_response:1; }","duration":"217.222747ms","start":"2024-09-20T22:43:05.701834Z","end":"2024-09-20T22:43:05.919057Z","steps":["trace[1674659129] 'process raft request'  (duration: 216.835703ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:43:05.919249Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.922446ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-attacher\" ","response":"range_response_count:1 size:535"}
	{"level":"info","ts":"2024-09-20T22:43:05.919285Z","caller":"traceutil/trace.go:171","msg":"trace[1374486179] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-attacher; range_end:; response_count:1; response_revision:924; }","duration":"134.957486ms","start":"2024-09-20T22:43:05.784315Z","end":"2024-09-20T22:43:05.919273Z","steps":["trace[1374486179] 'agreement among raft nodes before linearized reading'  (duration: 134.850457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:43:05.919489Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.166818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa\" ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2024-09-20T22:43:05.919571Z","caller":"traceutil/trace.go:171","msg":"trace[2143186676] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/csi-hostpathplugin-sa; range_end:; response_count:1; response_revision:924; }","duration":"135.249852ms","start":"2024-09-20T22:43:05.784315Z","end":"2024-09-20T22:43:05.919565Z","steps":["trace[2143186676] 'agreement among raft nodes before linearized reading'  (duration: 135.126938ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:43:07.281112Z","caller":"traceutil/trace.go:171","msg":"trace[2134755702] transaction","detail":"{read_only:false; response_revision:958; number_of_response:1; }","duration":"101.629675ms","start":"2024-09-20T22:43:07.179467Z","end":"2024-09-20T22:43:07.281097Z","steps":["trace[2134755702] 'process raft request'  (duration: 27.319106ms)","trace[2134755702] 'compare'  (duration: 74.096345ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T22:43:27.166551Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.944431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-20T22:43:27.166651Z","caller":"traceutil/trace.go:171","msg":"trace[1222296892] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1034; }","duration":"101.049749ms","start":"2024-09-20T22:43:27.065592Z","end":"2024-09-20T22:43:27.166642Z","steps":["trace[1222296892] 'range keys from in-memory index tree'  (duration: 100.912466ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:52:43.227007Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1867}
	{"level":"info","ts":"2024-09-20T22:52:43.277758Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1867,"took":"50.397697ms","hash":3457316755,"current-db-size-bytes":8597504,"current-db-size":"8.6 MB","current-db-size-in-use-bytes":4808704,"current-db-size-in-use":"4.8 MB"}
	{"level":"info","ts":"2024-09-20T22:52:43.277809Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3457316755,"revision":1867,"compact-revision":-1}
	{"level":"info","ts":"2024-09-20T22:54:25.118593Z","caller":"traceutil/trace.go:171","msg":"trace[120953285] linearizableReadLoop","detail":"{readStateIndex:2628; appliedIndex:2626; }","duration":"130.710163ms","start":"2024-09-20T22:54:24.987871Z","end":"2024-09-20T22:54:25.118581Z","steps":["trace[120953285] 'read index received'  (duration: 93.477437ms)","trace[120953285] 'applied index is now lower than readState.Index'  (duration: 37.232119ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T22:54:25.118796Z","caller":"traceutil/trace.go:171","msg":"trace[1891567519] transaction","detail":"{read_only:false; response_revision:2454; number_of_response:1; }","duration":"131.804939ms","start":"2024-09-20T22:54:24.986984Z","end":"2024-09-20T22:54:25.118789Z","steps":["trace[1891567519] 'process raft request'  (duration: 94.221866ms)","trace[1891567519] 'compare'  (duration: 37.104177ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-20T22:54:25.118948Z","caller":"traceutil/trace.go:171","msg":"trace[256111853] transaction","detail":"{read_only:false; response_revision:2455; number_of_response:1; }","duration":"131.085982ms","start":"2024-09-20T22:54:24.987856Z","end":"2024-09-20T22:54:25.118942Z","steps":["trace[256111853] 'process raft request'  (duration: 130.691394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-20T22:54:25.119197Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.317481ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/headlamp-7b5c95b59d-lss9w\" ","response":"range_response_count:1 size:2795"}
	{"level":"info","ts":"2024-09-20T22:54:25.119262Z","caller":"traceutil/trace.go:171","msg":"trace[326292866] range","detail":"{range_begin:/registry/pods/headlamp/headlamp-7b5c95b59d-lss9w; range_end:; response_count:1; response_revision:2455; }","duration":"131.386232ms","start":"2024-09-20T22:54:24.987868Z","end":"2024-09-20T22:54:25.119254Z","steps":["trace[326292866] 'agreement among raft nodes before linearized reading'  (duration: 131.230015ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T22:54:25.258551Z","caller":"traceutil/trace.go:171","msg":"trace[643037116] transaction","detail":"{read_only:false; response_revision:2457; number_of_response:1; }","duration":"135.509715ms","start":"2024-09-20T22:54:25.123021Z","end":"2024-09-20T22:54:25.258531Z","steps":["trace[643037116] 'process raft request'  (duration: 114.312124ms)","trace[643037116] 'compare'  (duration: 21.045483ms)"],"step_count":2}
	
	
	==> gcp-auth [d465e0053bc4] <==
	2024/09/20 22:45:38 GCP Auth Webhook started!
	2024/09/20 22:45:55 Ready to marshal response ...
	2024/09/20 22:45:55 Ready to write response ...
	2024/09/20 22:45:56 Ready to marshal response ...
	2024/09/20 22:45:56 Ready to write response ...
	2024/09/20 22:46:20 Ready to marshal response ...
	2024/09/20 22:46:20 Ready to write response ...
	2024/09/20 22:46:20 Ready to marshal response ...
	2024/09/20 22:46:20 Ready to write response ...
	2024/09/20 22:46:20 Ready to marshal response ...
	2024/09/20 22:46:20 Ready to write response ...
	2024/09/20 22:54:24 Ready to marshal response ...
	2024/09/20 22:54:24 Ready to write response ...
	2024/09/20 22:54:24 Ready to marshal response ...
	2024/09/20 22:54:24 Ready to write response ...
	2024/09/20 22:54:24 Ready to marshal response ...
	2024/09/20 22:54:24 Ready to write response ...
	2024/09/20 22:54:34 Ready to marshal response ...
	2024/09/20 22:54:34 Ready to write response ...
	2024/09/20 22:55:00 Ready to marshal response ...
	2024/09/20 22:55:00 Ready to write response ...
	2024/09/20 22:55:00 Ready to marshal response ...
	2024/09/20 22:55:00 Ready to write response ...
	2024/09/20 22:55:09 Ready to marshal response ...
	2024/09/20 22:55:09 Ready to write response ...
	
	
	==> kernel <==
	 22:55:36 up  6:24,  0 users,  load average: 3.33, 3.27, 2.09
	Linux addons-918000 6.6.32-linuxkit #1 SMP PREEMPT_DYNAMIC Thu Jun 13 14:14:43 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c26c12f74543] <==
	W0920 22:45:10.235498       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.86.233:443: connect: connection refused
	E0920 22:45:10.235542       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.86.233:443: connect: connection refused" logger="UnhandledError"
	I0920 22:45:55.064701       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 22:45:55.077242       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0920 22:46:10.608767       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0920 22:46:10.673147       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0920 22:46:10.989926       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 22:46:11.088046       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 22:46:11.088608       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0920 22:46:11.089263       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0920 22:46:11.592435       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0920 22:46:11.696713       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 22:46:11.771651       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0920 22:46:11.888130       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0920 22:46:12.179576       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0920 22:46:12.189391       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0920 22:46:12.375478       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0920 22:46:12.698244       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0920 22:46:12.889219       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0920 22:46:13.179197       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0920 22:54:24.956069       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.182.70"}
	E0920 22:55:10.709763       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 22:55:10.714362       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 22:55:10.718663       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0920 22:55:25.721677       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [f179f2206516] <==
	I0920 22:54:38.690862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="7.41µs"
	W0920 22:54:39.541913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:54:39.541977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:54:40.946838       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:54:40.946901       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 22:54:48.773197       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0920 22:54:49.332171       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="5.034µs"
	W0920 22:54:50.012957       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:54:50.013019       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 22:54:51.864056       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-918000"
	W0920 22:54:57.456453       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:54:57.456519       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 22:54:59.409687       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0920 22:55:10.230048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="3.238µs"
	W0920 22:55:11.660269       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:55:11.660346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:55:19.642152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:55:19.642218       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0920 22:55:22.437038       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-918000"
	W0920 22:55:26.803845       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:55:26.803910       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:55:27.139634       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:55:27.139729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0920 22:55:30.777936       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0920 22:55:30.778039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [8f11984032d7] <==
	I0920 22:42:57.586078       1 server_linux.go:66] "Using iptables proxy"
	I0920 22:42:58.276137       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 22:42:58.276203       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 22:42:58.580903       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 22:42:58.580960       1 server_linux.go:169] "Using iptables Proxier"
	I0920 22:42:58.585625       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 22:42:58.586010       1 server.go:483] "Version info" version="v1.31.1"
	I0920 22:42:58.586027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 22:42:58.587842       1 config.go:199] "Starting service config controller"
	I0920 22:42:58.587862       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 22:42:58.587889       1 config.go:105] "Starting endpoint slice config controller"
	I0920 22:42:58.587894       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 22:42:58.588208       1 config.go:328] "Starting node config controller"
	I0920 22:42:58.588219       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 22:42:58.771199       1 shared_informer.go:320] Caches are synced for node config
	I0920 22:42:58.771256       1 shared_informer.go:320] Caches are synced for service config
	I0920 22:42:58.771272       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [6e9be016c18f] <==
	W0920 22:42:44.097880       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 22:42:44.097893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:44.098609       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 22:42:44.098629       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:44.097522       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 22:42:44.099650       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:44.169151       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 22:42:44.169205       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:44.918955       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 22:42:44.919006       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:44.985751       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 22:42:44.985800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.020414       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 22:42:45.020615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.025673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 22:42:45.025750       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.108707       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 22:42:45.108781       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.113843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 22:42:45.113902       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.196316       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 22:42:45.196392       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 22:42:45.400674       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 22:42:45.400763       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0920 22:42:47.994971       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.001288    2356 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/abc2d150-0584-4ea6-bfe9-697259e20a7d-script\") pod \"abc2d150-0584-4ea6-bfe9-697259e20a7d\" (UID: \"abc2d150-0584-4ea6-bfe9-697259e20a7d\") "
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.001768    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc2d150-0584-4ea6-bfe9-697259e20a7d-data" (OuterVolumeSpecName: "data") pod "abc2d150-0584-4ea6-bfe9-697259e20a7d" (UID: "abc2d150-0584-4ea6-bfe9-697259e20a7d"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.001881    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abc2d150-0584-4ea6-bfe9-697259e20a7d-script" (OuterVolumeSpecName: "script") pod "abc2d150-0584-4ea6-bfe9-697259e20a7d" (UID: "abc2d150-0584-4ea6-bfe9-697259e20a7d"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.001916    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abc2d150-0584-4ea6-bfe9-697259e20a7d-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "abc2d150-0584-4ea6-bfe9-697259e20a7d" (UID: "abc2d150-0584-4ea6-bfe9-697259e20a7d"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.004221    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abc2d150-0584-4ea6-bfe9-697259e20a7d-kube-api-access-xjvq9" (OuterVolumeSpecName: "kube-api-access-xjvq9") pod "abc2d150-0584-4ea6-bfe9-697259e20a7d" (UID: "abc2d150-0584-4ea6-bfe9-697259e20a7d"). InnerVolumeSpecName "kube-api-access-xjvq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.091130    2356 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-lnt2v" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.101676    2356 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/abc2d150-0584-4ea6-bfe9-697259e20a7d-gcp-creds\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.101742    2356 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/abc2d150-0584-4ea6-bfe9-697259e20a7d-script\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.101757    2356 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-xjvq9\" (UniqueName: \"kubernetes.io/projected/abc2d150-0584-4ea6-bfe9-697259e20a7d-kube-api-access-xjvq9\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.101774    2356 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/abc2d150-0584-4ea6-bfe9-697259e20a7d-data\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:12 addons-918000 kubelet[2356]: I0920 22:55:12.802560    2356 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f917dfece1e8e5be11e5fa4fe2713eaecad343bd9b422b1ebed46a189e1ab8a"
	Sep 20 22:55:16 addons-918000 kubelet[2356]: E0920 22:55:16.095029    2356 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="88828e8a-ddb5-4550-9c25-036357fce8eb"
	Sep 20 22:55:16 addons-918000 kubelet[2356]: I0920 22:55:16.101944    2356 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abc2d150-0584-4ea6-bfe9-697259e20a7d" path="/var/lib/kubelet/pods/abc2d150-0584-4ea6-bfe9-697259e20a7d/volumes"
	Sep 20 22:55:24 addons-918000 kubelet[2356]: E0920 22:55:24.094020    2356 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="90b784f3-1d5c-4dd7-86f4-5771ca53cd9e"
	Sep 20 22:55:26 addons-918000 kubelet[2356]: I0920 22:55:26.091992    2356 scope.go:117] "RemoveContainer" containerID="93c828a27e0ab7bfa27cf82a8195a3ecce10cd33760f17b6aa8fccc821de0193"
	Sep 20 22:55:26 addons-918000 kubelet[2356]: E0920 22:55:26.092184    2356 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-x2jgz_gadget(3f2ad424-bb11-443c-8582-1196248c0d80)\"" pod="gadget/gadget-x2jgz" podUID="3f2ad424-bb11-443c-8582-1196248c0d80"
	Sep 20 22:55:29 addons-918000 kubelet[2356]: I0920 22:55:29.091083    2356 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-t5vth" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 22:55:30 addons-918000 kubelet[2356]: E0920 22:55:30.093887    2356 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="88828e8a-ddb5-4550-9c25-036357fce8eb"
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.663031    2356 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqdpg\" (UniqueName: \"kubernetes.io/projected/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-kube-api-access-fqdpg\") pod \"90b784f3-1d5c-4dd7-86f4-5771ca53cd9e\" (UID: \"90b784f3-1d5c-4dd7-86f4-5771ca53cd9e\") "
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.663080    2356 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-gcp-creds\") pod \"90b784f3-1d5c-4dd7-86f4-5771ca53cd9e\" (UID: \"90b784f3-1d5c-4dd7-86f4-5771ca53cd9e\") "
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.663184    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "90b784f3-1d5c-4dd7-86f4-5771ca53cd9e" (UID: "90b784f3-1d5c-4dd7-86f4-5771ca53cd9e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.664917    2356 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-kube-api-access-fqdpg" (OuterVolumeSpecName: "kube-api-access-fqdpg") pod "90b784f3-1d5c-4dd7-86f4-5771ca53cd9e" (UID: "90b784f3-1d5c-4dd7-86f4-5771ca53cd9e"). InnerVolumeSpecName "kube-api-access-fqdpg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.765604    2356 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-gcp-creds\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:34 addons-918000 kubelet[2356]: I0920 22:55:34.765694    2356 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-fqdpg\" (UniqueName: \"kubernetes.io/projected/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e-kube-api-access-fqdpg\") on node \"addons-918000\" DevicePath \"\""
	Sep 20 22:55:36 addons-918000 kubelet[2356]: I0920 22:55:36.103800    2356 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90b784f3-1d5c-4dd7-86f4-5771ca53cd9e" path="/var/lib/kubelet/pods/90b784f3-1d5c-4dd7-86f4-5771ca53cd9e/volumes"
	
	
	==> storage-provisioner [1a82b40d4d16] <==
	I0920 22:43:00.977845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 22:43:01.074490       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 22:43:01.074562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 22:43:01.083812       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 22:43:01.083990       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-918000_e6e07327-2b49-4e7d-b244-dfa1ef22029d!
	I0920 22:43:01.085543       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2883320a-22b5-4c90-81ab-136c6b2d5c2b", APIVersion:"v1", ResourceVersion:"683", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-918000_e6e07327-2b49-4e7d-b244-dfa1ef22029d became leader
	I0920 22:43:01.185366       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-918000_e6e07327-2b49-4e7d-b244-dfa1ef22029d!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-918000 -n addons-918000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-918000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-pjv82 ingress-nginx-admission-patch-9fbgb
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-918000 describe pod busybox ingress-nginx-admission-create-pjv82 ingress-nginx-admission-patch-9fbgb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-918000 describe pod busybox ingress-nginx-admission-create-pjv82 ingress-nginx-admission-patch-9fbgb: exit status 1 (58.734969ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-918000/192.168.49.2
	Start Time:       Fri, 20 Sep 2024 15:46:20 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cs2kk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-cs2kk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m17s                  default-scheduler  Successfully assigned default/busybox to addons-918000
	  Normal   Pulling    7m41s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m41s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m41s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m27s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pjv82" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9fbgb" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-918000 describe pod busybox ingress-nginx-admission-create-pjv82 ingress-nginx-admission-patch-9fbgb: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.34s)


Test pass (323/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 25.15
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.35
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.1/json-events 7.87
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.32
18 TestDownloadOnly/v1.31.1/DeleteAll 0.35
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.21
20 TestDownloadOnlyKic 1.53
21 TestBinaryMirror 1.35
22 TestOffline 64.46
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
27 TestAddons/Setup 211.5
29 TestAddons/serial/Volcano 40.66
31 TestAddons/serial/GCPAuth/Namespaces 0.1
35 TestAddons/parallel/InspektorGadget 10.65
36 TestAddons/parallel/MetricsServer 5.61
38 TestAddons/parallel/CSI 55.78
39 TestAddons/parallel/Headlamp 19.69
40 TestAddons/parallel/CloudSpanner 5.53
41 TestAddons/parallel/LocalPath 52.74
42 TestAddons/parallel/NvidiaDevicePlugin 5.49
43 TestAddons/parallel/Yakd 10.62
44 TestAddons/StoppedEnableDisable 11.4
45 TestCertOptions 22.4
46 TestCertExpiration 224.79
47 TestDockerFlags 21.98
48 TestForceSystemdFlag 23.63
49 TestForceSystemdEnv 21.23
52 TestHyperKitDriverInstallOrUpdate 8.85
55 TestErrorSpam/setup 18.83
56 TestErrorSpam/start 2.2
57 TestErrorSpam/status 0.81
58 TestErrorSpam/pause 1.41
59 TestErrorSpam/unpause 1.48
60 TestErrorSpam/stop 11.22
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 61.58
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 37.06
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.21
72 TestFunctional/serial/CacheCmd/cache/add_local 1.39
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
74 TestFunctional/serial/CacheCmd/cache/list 0.08
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.51
77 TestFunctional/serial/CacheCmd/cache/delete 0.17
78 TestFunctional/serial/MinikubeKubectlCmd 1.21
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.57
80 TestFunctional/serial/ExtraConfig 40.18
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 3
83 TestFunctional/serial/LogsFileCmd 2.84
84 TestFunctional/serial/InvalidService 4.47
86 TestFunctional/parallel/ConfigCmd 0.49
87 TestFunctional/parallel/DashboardCmd 15.36
88 TestFunctional/parallel/DryRun 1.42
89 TestFunctional/parallel/InternationalLanguage 0.58
90 TestFunctional/parallel/StatusCmd 0.82
95 TestFunctional/parallel/AddonsCmd 0.23
96 TestFunctional/parallel/PersistentVolumeClaim 27.09
98 TestFunctional/parallel/SSHCmd 0.51
99 TestFunctional/parallel/CpCmd 1.7
100 TestFunctional/parallel/MySQL 26.44
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 1.77
106 TestFunctional/parallel/NodeLabels 0.06
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.29
110 TestFunctional/parallel/License 0.64
111 TestFunctional/parallel/Version/short 0.1
112 TestFunctional/parallel/Version/components 0.69
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.77
118 TestFunctional/parallel/ImageCommands/Setup 1.8
119 TestFunctional/parallel/DockerEnv/bash 1.04
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.79
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.53
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.51
130 TestFunctional/parallel/ServiceCmd/DeployApp 23.14
131 TestFunctional/parallel/ServiceCmd/List 0.45
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.15
138 TestFunctional/parallel/ServiceCmd/HTTPS 15
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
140 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
144 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
145 TestFunctional/parallel/ServiceCmd/Format 15
146 TestFunctional/parallel/ServiceCmd/URL 15
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/MountCmd/any-port 7.92
149 TestFunctional/parallel/ProfileCmd/profile_list 0.63
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
151 TestFunctional/parallel/MountCmd/specific-port 1.78
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 88.44
160 TestMultiControlPlane/serial/DeployApp 6.03
161 TestMultiControlPlane/serial/PingHostFromPods 1.39
162 TestMultiControlPlane/serial/AddWorkerNode 16.5
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
165 TestMultiControlPlane/serial/CopyFile 16.1
166 TestMultiControlPlane/serial/StopSecondaryNode 11.42
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
168 TestMultiControlPlane/serial/RestartSecondaryNode 39.72
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 228.18
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.43
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
173 TestMultiControlPlane/serial/StopCluster 32.59
174 TestMultiControlPlane/serial/RestartCluster 81.76
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
176 TestMultiControlPlane/serial/AddSecondaryNode 34.89
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
180 TestImageBuild/serial/Setup 18.35
181 TestImageBuild/serial/NormalBuild 1.81
182 TestImageBuild/serial/BuildWithBuildArg 0.83
183 TestImageBuild/serial/BuildWithDockerIgnore 0.63
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.88
188 TestJSONOutput/start/Command 32.82
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.46
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.48
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.69
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.58
213 TestKicCustomNetwork/create_custom_network 20.57
214 TestKicCustomNetwork/use_default_bridge_network 21.65
215 TestKicExistingNetwork 20.42
216 TestKicCustomSubnet 20.17
217 TestKicStaticIP 20.89
218 TestMainNoArgs 0.08
219 TestMinikubeProfile 42.73
222 TestMountStart/serial/StartWithMountFirst 6.24
223 TestMountStart/serial/VerifyMountFirst 0.26
224 TestMountStart/serial/StartWithMountSecond 6.35
225 TestMountStart/serial/VerifyMountSecond 0.26
226 TestMountStart/serial/DeleteFirst 1.64
227 TestMountStart/serial/VerifyMountPostDelete 0.26
228 TestMountStart/serial/Stop 1.44
229 TestMountStart/serial/RestartStopped 7.92
230 TestMountStart/serial/VerifyMountPostStop 0.26
233 TestMultiNode/serial/FreshStart2Nodes 65.36
234 TestMultiNode/serial/DeployApp2Nodes 54.54
235 TestMultiNode/serial/PingHostFrom2Pods 0.94
236 TestMultiNode/serial/AddNode 12.99
237 TestMultiNode/serial/MultiNodeLabels 0.06
238 TestMultiNode/serial/ProfileList 0.66
239 TestMultiNode/serial/CopyFile 9.33
240 TestMultiNode/serial/StopNode 2.25
241 TestMultiNode/serial/StartAfterStop 9.95
242 TestMultiNode/serial/RestartKeepsNodes 98.58
243 TestMultiNode/serial/DeleteNode 5.26
244 TestMultiNode/serial/StopMultiNode 21.5
245 TestMultiNode/serial/RestartMultiNode 56.92
246 TestMultiNode/serial/ValidateNameConflict 22.19
250 TestPreload 99.64
252 TestScheduledStopUnix 91.46
253 TestSkaffold 111.01
255 TestInsufficientStorage 7.97
256 TestRunningBinaryUpgrade 93.19
258 TestKubernetesUpgrade 330.1
259 TestMissingContainerUpgrade 84.66
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 9.8
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 12.29
273 TestStoppedBinaryUpgrade/Setup 1.02
274 TestStoppedBinaryUpgrade/Upgrade 59.72
275 TestStoppedBinaryUpgrade/MinikubeLogs 3.12
277 TestPause/serial/Start 33.83
278 TestPause/serial/SecondStartNoReconfiguration 32.34
279 TestPause/serial/Pause 0.53
280 TestPause/serial/VerifyStatus 0.26
281 TestPause/serial/Unpause 0.56
282 TestPause/serial/PauseAgain 0.57
283 TestPause/serial/DeletePaused 2.03
284 TestPause/serial/VerifyDeletedResources 15.11
293 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
294 TestNoKubernetes/serial/StartWithK8s 18.38
295 TestNoKubernetes/serial/StartWithStopK8s 7.28
296 TestNoKubernetes/serial/Start 5.68
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
298 TestNoKubernetes/serial/ProfileList 0.96
299 TestNoKubernetes/serial/Stop 1.43
300 TestNoKubernetes/serial/StartNoArgs 7.25
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
302 TestNetworkPlugins/group/auto/Start 33.87
303 TestNetworkPlugins/group/auto/KubeletFlags 0.26
304 TestNetworkPlugins/group/auto/NetCatPod 11.2
305 TestNetworkPlugins/group/auto/DNS 0.14
306 TestNetworkPlugins/group/auto/Localhost 0.11
307 TestNetworkPlugins/group/auto/HairPin 0.11
308 TestNetworkPlugins/group/calico/Start 58.66
309 TestNetworkPlugins/group/custom-flannel/Start 40.76
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.27
312 TestNetworkPlugins/group/calico/NetCatPod 12.18
313 TestNetworkPlugins/group/calico/DNS 0.14
314 TestNetworkPlugins/group/calico/Localhost 0.16
315 TestNetworkPlugins/group/calico/HairPin 0.12
316 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
317 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.18
318 TestNetworkPlugins/group/false/Start 30.18
319 TestNetworkPlugins/group/custom-flannel/DNS 0.14
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
322 TestNetworkPlugins/group/kindnet/Start 53.98
323 TestNetworkPlugins/group/false/KubeletFlags 0.27
324 TestNetworkPlugins/group/false/NetCatPod 10.26
325 TestNetworkPlugins/group/false/DNS 16.41
326 TestNetworkPlugins/group/false/Localhost 0.12
327 TestNetworkPlugins/group/false/HairPin 0.11
328 TestNetworkPlugins/group/flannel/Start 31.14
329 TestNetworkPlugins/group/kindnet/ControllerPod 6
330 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
331 TestNetworkPlugins/group/kindnet/NetCatPod 10.17
332 TestNetworkPlugins/group/kindnet/DNS 0.15
333 TestNetworkPlugins/group/kindnet/Localhost 0.12
334 TestNetworkPlugins/group/kindnet/HairPin 0.13
335 TestNetworkPlugins/group/flannel/ControllerPod 7.01
336 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
337 TestNetworkPlugins/group/flannel/NetCatPod 11.17
338 TestNetworkPlugins/group/enable-default-cni/Start 56.64
339 TestNetworkPlugins/group/flannel/DNS 0.13
340 TestNetworkPlugins/group/flannel/Localhost 0.11
341 TestNetworkPlugins/group/flannel/HairPin 0.12
342 TestNetworkPlugins/group/bridge/Start 62.33
343 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.27
344 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.17
345 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
346 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
347 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
348 TestNetworkPlugins/group/kubenet/Start 32.4
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
350 TestNetworkPlugins/group/bridge/NetCatPod 10.17
351 TestNetworkPlugins/group/bridge/DNS 0.14
352 TestNetworkPlugins/group/bridge/Localhost 0.12
353 TestNetworkPlugins/group/bridge/HairPin 0.11
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.26
355 TestNetworkPlugins/group/kubenet/NetCatPod 11.18
357 TestStartStop/group/old-k8s-version/serial/FirstStart 145.32
358 TestNetworkPlugins/group/kubenet/DNS 21.22
359 TestNetworkPlugins/group/kubenet/Localhost 0.1
360 TestNetworkPlugins/group/kubenet/HairPin 0.13
362 TestStartStop/group/no-preload/serial/FirstStart 48.97
363 TestStartStop/group/no-preload/serial/DeployApp 9.23
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
365 TestStartStop/group/no-preload/serial/Stop 10.89
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.33
367 TestStartStop/group/no-preload/serial/SecondStart 275.55
368 TestStartStop/group/old-k8s-version/serial/DeployApp 8.31
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
370 TestStartStop/group/old-k8s-version/serial/Stop 10.88
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.35
372 TestStartStop/group/old-k8s-version/serial/SecondStart 141.08
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/old-k8s-version/serial/Pause 2.35
378 TestStartStop/group/embed-certs/serial/FirstStart 31.74
379 TestStartStop/group/embed-certs/serial/DeployApp 8.23
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
381 TestStartStop/group/embed-certs/serial/Stop 10.77
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
383 TestStartStop/group/embed-certs/serial/SecondStart 298.33
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
387 TestStartStop/group/no-preload/serial/Pause 2.37
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 56.37
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.91
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.77
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.34
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.32
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
398 TestStartStop/group/embed-certs/serial/Pause 2.46
400 TestStartStop/group/newest-cni/serial/FirstStart 22.83
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
403 TestStartStop/group/newest-cni/serial/Stop 9.6
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
405 TestStartStop/group/newest-cni/serial/SecondStart 15.09
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
409 TestStartStop/group/newest-cni/serial/Pause 2.55
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.34
TestDownloadOnly/v1.20.0/json-events (25.15s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-642000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-642000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (25.149365809s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.15s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 15:41:55.194554   40830 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0920 15:41:55.194720   40830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-642000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-642000: exit status 85 (297.051636ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-642000 | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT |          |
	|         | -p download-only-642000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 15:41:30
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 15:41:30.101632   40831 out.go:345] Setting OutFile to fd 1 ...
	I0920 15:41:30.101956   40831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:41:30.101961   40831 out.go:358] Setting ErrFile to fd 2...
	I0920 15:41:30.101965   40831 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:41:30.102143   40831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	W0920 15:41:30.102242   40831 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19672-40263/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19672-40263/.minikube/config/config.json: no such file or directory
	I0920 15:41:30.104302   40831 out.go:352] Setting JSON to true
	I0920 15:41:30.129243   40831 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":22253,"bootTime":1726849837,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 15:41:30.129342   40831 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 15:41:30.151874   40831 out.go:97] [download-only-642000] minikube v1.34.0 on Darwin 14.6.1
	I0920 15:41:30.152080   40831 notify.go:220] Checking for updates...
	W0920 15:41:30.152120   40831 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 15:41:30.173225   40831 out.go:169] MINIKUBE_LOCATION=19672
	I0920 15:41:30.194638   40831 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 15:41:30.216641   40831 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 15:41:30.237379   40831 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 15:41:30.258619   40831 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	W0920 15:41:30.301274   40831 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 15:41:30.301807   40831 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 15:41:30.325848   40831 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0920 15:41:30.325997   40831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:41:30.409004   40831 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:41:30.399921821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:41:30.430589   40831 out.go:97] Using the docker driver based on user configuration
	I0920 15:41:30.430697   40831 start.go:297] selected driver: docker
	I0920 15:41:30.430713   40831 start.go:901] validating driver "docker" against <nil>
	I0920 15:41:30.431003   40831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:41:30.513771   40831 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:41:30.504935995 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:41:30.513985   40831 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 15:41:30.517379   40831 start_flags.go:393] Using suggested 7791MB memory alloc based on sys=32768MB, container=7839MB
	I0920 15:41:30.517536   40831 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 15:41:30.538576   40831 out.go:169] Using Docker Desktop driver with root privileges
	I0920 15:41:30.560359   40831 cni.go:84] Creating CNI manager for ""
	I0920 15:41:30.560503   40831 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0920 15:41:30.560644   40831 start.go:340] cluster config:
	{Name:download-only-642000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:7791 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 15:41:30.582385   40831 out.go:97] Starting "download-only-642000" primary control-plane node in "download-only-642000" cluster
	I0920 15:41:30.582432   40831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 15:41:30.603515   40831 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 15:41:30.603604   40831 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 15:41:30.603689   40831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 15:41:30.622691   40831 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 15:41:30.622982   40831 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 15:41:30.623122   40831 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 15:41:30.665158   40831 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 15:41:30.665183   40831 cache.go:56] Caching tarball of preloaded images
	I0920 15:41:30.666389   40831 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 15:41:30.688622   40831 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 15:41:30.688649   40831 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 15:41:30.782174   40831 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0920 15:41:37.784930   40831 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 15:41:37.785099   40831 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0920 15:41:38.333496   40831 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0920 15:41:38.333718   40831 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/download-only-642000/config.json ...
	I0920 15:41:38.333742   40831 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/download-only-642000/config.json: {Name:mk2c4bd3780590a1bccc5125cbe2db4997a7fc1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 15:41:38.334885   40831 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0920 15:41:38.335266   40831 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-642000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-642000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.35s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.35s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-642000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.1/json-events (7.87s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-926000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-926000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker : (7.868516376s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (7.87s)

TestDownloadOnly/v1.31.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 15:42:03.924803   40830 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0920 15:42:03.924842   40830 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.32s)
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-926000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-926000: exit status 85 (320.631176ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-642000 | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT |                     |
	|         | -p download-only-642000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT | 20 Sep 24 15:41 PDT |
	| delete  | -p download-only-642000        | download-only-642000 | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT | 20 Sep 24 15:41 PDT |
	| start   | -o=json --download-only        | download-only-926000 | jenkins | v1.34.0 | 20 Sep 24 15:41 PDT |                     |
	|         | -p download-only-926000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 15:41:56
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.23.0 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 15:41:56.111143   40882 out.go:345] Setting OutFile to fd 1 ...
	I0920 15:41:56.111445   40882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:41:56.111450   40882 out.go:358] Setting ErrFile to fd 2...
	I0920 15:41:56.111454   40882 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 15:41:56.111630   40882 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 15:41:56.113066   40882 out.go:352] Setting JSON to true
	I0920 15:41:56.135916   40882 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":22279,"bootTime":1726849837,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 15:41:56.136084   40882 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 15:41:56.157729   40882 out.go:97] [download-only-926000] minikube v1.34.0 on Darwin 14.6.1
	I0920 15:41:56.157966   40882 notify.go:220] Checking for updates...
	I0920 15:41:56.179295   40882 out.go:169] MINIKUBE_LOCATION=19672
	I0920 15:41:56.200555   40882 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 15:41:56.222530   40882 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 15:41:56.265343   40882 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 15:41:56.286613   40882 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	W0920 15:41:56.330460   40882 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 15:41:56.331003   40882 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 15:41:56.355141   40882 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0920 15:41:56.355277   40882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:41:56.436982   40882 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:41:56.427681784 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:41:56.463621   40882 out.go:97] Using the docker driver based on user configuration
	I0920 15:41:56.463666   40882 start.go:297] selected driver: docker
	I0920 15:41:56.463684   40882 start.go:901] validating driver "docker" against <nil>
	I0920 15:41:56.463965   40882 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 15:41:56.547260   40882 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:false NGoroutines:75 SystemTime:2024-09-20 22:41:56.538733517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:h
ttps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 15:41:56.547481   40882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 15:41:56.550471   40882 start_flags.go:393] Using suggested 7791MB memory alloc based on sys=32768MB, container=7839MB
	I0920 15:41:56.550657   40882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 15:41:56.572104   40882 out.go:169] Using Docker Desktop driver with root privileges
	I0920 15:41:56.593281   40882 cni.go:84] Creating CNI manager for ""
	I0920 15:41:56.593401   40882 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0920 15:41:56.593416   40882 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0920 15:41:56.593551   40882 start.go:340] cluster config:
	{Name:download-only-926000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:7791 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-926000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 15:41:56.615318   40882 out.go:97] Starting "download-only-926000" primary control-plane node in "download-only-926000" cluster
	I0920 15:41:56.615363   40882 cache.go:121] Beginning downloading kic base image for docker with docker
	I0920 15:41:56.635828   40882 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0920 15:41:56.635942   40882 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 15:41:56.636067   40882 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0920 15:41:56.654704   40882 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0920 15:41:56.654956   40882 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0920 15:41:56.654975   40882 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0920 15:41:56.654990   40882 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0920 15:41:56.654999   40882 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0920 15:41:56.697150   40882 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	I0920 15:41:56.697175   40882 cache.go:56] Caching tarball of preloaded images
	I0920 15:41:56.698199   40882 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0920 15:41:56.719641   40882 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 15:41:56.719669   40882 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4 ...
	I0920 15:41:56.815693   40882 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4?checksum=md5:42e9a173dd5f0c45ed1a890dd06aec5a -> /Users/jenkins/minikube-integration/19672-40263/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-926000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-926000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.32s)

TestDownloadOnly/v1.31.1/DeleteAll (0.35s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.35s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-926000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.53s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-090000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-090000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-090000
--- PASS: TestDownloadOnlyKic (1.53s)

TestBinaryMirror (1.35s)
=== RUN   TestBinaryMirror
I0920 15:42:06.761729   40830 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-929000 --alsologtostderr --binary-mirror http://127.0.0.1:61048 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-929000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-929000
--- PASS: TestBinaryMirror (1.35s)

TestOffline (64.46s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-716000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m2.423230702s)
helpers_test.go:175: Cleaning up "offline-docker-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-716000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-716000: (2.040585407s)
--- PASS: TestOffline (64.46s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-918000
addons_test.go:975: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-918000: exit status 85 (189.734601ms)

-- stdout --
	* Profile "addons-918000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-918000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-918000
addons_test.go:986: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-918000: exit status 85 (210.126172ms)

-- stdout --
	* Profile "addons-918000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-918000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

TestAddons/Setup (211.5s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-918000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-darwin-amd64 start -p addons-918000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns: (3m31.498900212s)
--- PASS: TestAddons/Setup (211.50s)

TestAddons/serial/Volcano (40.66s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 13.412779ms
addons_test.go:843: volcano-admission stabilized in 13.447403ms
addons_test.go:851: volcano-controller stabilized in 13.471448ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-6mm6v" [9c8bbd9f-aa28-4b47-9846-ee08abd09219] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.006888991s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-czkr2" [e8e246dc-19bd-4f73-b35d-7086cf203fb1] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005787713s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-2pxrv" [6dc127ea-40c8-42f1-9d3b-86d53fa804f0] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004637232s
addons_test.go:870: (dbg) Run:  kubectl --context addons-918000 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-918000 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-918000 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [80a0352a-699a-44bf-86c1-8807f1fe1a01] Pending
helpers_test.go:344: "test-job-nginx-0" [80a0352a-699a-44bf-86c1-8807f1fe1a01] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [80a0352a-699a-44bf-86c1-8807f1fe1a01] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.005347049s
addons_test.go:906: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 addons disable volcano --alsologtostderr -v=1: (10.357191808s)
--- PASS: TestAddons/serial/Volcano (40.66s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-918000 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-918000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/InspektorGadget (10.65s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-x2jgz" [3f2ad424-bb11-443c-8582-1196248c0d80] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004885491s
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-918000
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-918000: (5.643697658s)
--- PASS: TestAddons/parallel/InspektorGadget (10.65s)

TestAddons/parallel/MetricsServer (5.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.587511ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-4k7vz" [906fee52-1f0b-46f7-a88d-aa43570f58dc] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004034194s
addons_test.go:413: (dbg) Run:  kubectl --context addons-918000 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.61s)

TestAddons/parallel/CSI (55.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 15:55:48.706844   40830 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 15:55:48.711190   40830 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 15:55:48.711203   40830 kapi.go:107] duration metric: took 4.370697ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 4.379947ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-918000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-918000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [28df43e9-89f2-4f66-a098-34533c70b9b7] Pending
helpers_test.go:344: "task-pv-pod" [28df43e9-89f2-4f66-a098-34533c70b9b7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [28df43e9-89f2-4f66-a098-34533c70b9b7] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005325623s
addons_test.go:528: (dbg) Run:  kubectl --context addons-918000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-918000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-918000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-918000 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-918000 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-918000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-918000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [bf0505a4-c68f-4dcf-b27e-27c2bda06d08] Pending
helpers_test.go:344: "task-pv-pod-restore" [bf0505a4-c68f-4dcf-b27e-27c2bda06d08] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [bf0505a4-c68f-4dcf-b27e-27c2bda06d08] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003963244s
addons_test.go:570: (dbg) Run:  kubectl --context addons-918000 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-918000 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-918000 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.572078259s)
addons_test.go:586: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (55.78s)

TestAddons/parallel/Headlamp (19.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-918000 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-918000 --alsologtostderr -v=1: (1.039693664s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lss9w" [99388f0d-a21a-4f10-91c1-a5ee29ddeed3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lss9w" [99388f0d-a21a-4f10-91c1-a5ee29ddeed3] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004727571s
addons_test.go:777: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 addons disable headlamp --alsologtostderr -v=1: (5.640721146s)
--- PASS: TestAddons/parallel/Headlamp (19.69s)

TestAddons/parallel/CloudSpanner (5.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-t4tvj" [9a54422a-b386-478c-9c35-8bdf9f96d39c] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005920867s
addons_test.go:808: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-918000
--- PASS: TestAddons/parallel/CloudSpanner (5.53s)

TestAddons/parallel/LocalPath (52.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-918000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-918000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [21855c56-f8b8-4185-b391-97b19f39e9cb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [21855c56-f8b8-4185-b391-97b19f39e9cb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [21855c56-f8b8-4185-b391-97b19f39e9cb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005608751s
addons_test.go:938: (dbg) Run:  kubectl --context addons-918000 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 ssh "cat /opt/local-path-provisioner/pvc-1e953f3f-0f81-401a-ab56-c6fd2854bea4_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-918000 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-918000 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.981785536s)
--- PASS: TestAddons/parallel/LocalPath (52.74s)

TestAddons/parallel/NvidiaDevicePlugin (5.49s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jlk6j" [b183cbd1-aab3-4216-a691-a6d5e8512427] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004756135s
addons_test.go:1002: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-918000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.49s)

TestAddons/parallel/Yakd (10.62s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j28qv" [83b80a5d-7ec3-45c8-9093-a863384c3068] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004262277s
addons_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-darwin-amd64 -p addons-918000 addons disable yakd --alsologtostderr -v=1: (5.616024065s)
--- PASS: TestAddons/parallel/Yakd (10.62s)

TestAddons/StoppedEnableDisable (11.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-918000
addons_test.go:170: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-918000: (10.818486996s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-918000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-918000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-918000
--- PASS: TestAddons/StoppedEnableDisable (11.40s)

TestCertOptions (22.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-931000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-931000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (19.983726951s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-931000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I0920 16:28:37.887573   40830 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-931000
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-931000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-931000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-931000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-931000: (1.859913692s)
--- PASS: TestCertOptions (22.40s)

TestCertExpiration (224.79s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-066000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-066000 --memory=2048 --cert-expiration=3m --driver=docker : (21.120759083s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-066000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0920 16:31:34.836602   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-066000 --memory=2048 --cert-expiration=8760h --driver=docker : (21.562025306s)
helpers_test.go:175: Cleaning up "cert-expiration-066000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-066000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-066000: (2.106383048s)
--- PASS: TestCertExpiration (224.79s)

TestDockerFlags (21.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-070000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-070000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (19.538919035s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-070000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-070000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-070000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-070000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-070000: (1.869013975s)
--- PASS: TestDockerFlags (21.98s)

TestForceSystemdFlag (23.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-664000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-664000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (21.228541561s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-664000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-664000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-664000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-664000: (2.086714543s)
--- PASS: TestForceSystemdFlag (23.63s)

TestForceSystemdEnv (21.23s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-246000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-246000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (18.931912993s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-246000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-246000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-246000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-246000: (2.010021878s)
--- PASS: TestForceSystemdEnv (21.23s)

TestHyperKitDriverInstallOrUpdate (8.85s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I0920 16:27:19.747755   40830 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 16:27:19.748013   40830 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
W0920 16:27:20.564081   40830 install.go:62] docker-machine-driver-hyperkit: exit status 1
W0920 16:27:20.564290   40830 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 16:27:20.564340   40830 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit
I0920 16:27:21.048773   40830 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10141740 0x10141740 0x10141740 0x10141740 0x10141740 0x10141740 0x10141740] Decompressors:map[bz2:0xc000467840 gz:0xc000467848 tar:0xc0004677f0 tar.bz2:0xc000467800 tar.gz:0xc000467810 tar.xz:0xc000467820 tar.zst:0xc000467830 tbz2:0xc000467800 tgz:0xc000467810 txz:0xc000467820 tzst:0xc000467830 xz:0xc000467850 zip:0xc000467860 zst:0xc000467858] Getters:map[file:0xc0006bbd00 http:0xc00075bae0 https:0xc00075bb30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}
: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 16:27:21.048859   40830 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit
I0920 16:27:24.220054   40830 install.go:79] stdout: 
W0920 16:27:24.220201   40830 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit

I0920 16:27:24.220230   40830 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit]
I0920 16:27:24.236135   40830 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit]
I0920 16:27:24.251926   40830 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit]
I0920 16:27:24.266892   40830 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/001/docker-machine-driver-hyperkit]
I0920 16:27:24.296447   40830 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0920 16:27:24.296606   40830 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
I0920 16:27:25.051149   40830 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W0920 16:27:25.051175   40830 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W0920 16:27:25.051236   40830 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I0920 16:27:25.051283   40830 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit
I0920 16:27:25.455975   40830 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-amd64.sha256 Dst:/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10141740 0x10141740 0x10141740 0x10141740 0x10141740 0x10141740 0x10141740] Decompressors:map[bz2:0xc000467840 gz:0xc000467848 tar:0xc0004677f0 tar.bz2:0xc000467800 tar.gz:0xc000467810 tar.xz:0xc000467820 tar.zst:0xc000467830 tbz2:0xc000467800 tgz:0xc000467810 txz:0xc000467820 tzst:0xc000467830 xz:0xc000467850 zip:0xc000467860 zst:0xc000467858] Getters:map[file:0xc0007fd050 http:0xc0006c2f00 https:0xc0006c2f50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}
: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0920 16:27:25.456043   40830 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit
I0920 16:27:28.521815   40830 install.go:79] stdout: 
W0920 16:27:28.521941   40830 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit

I0920 16:27:28.521967   40830 install.go:99] testing: [sudo -n chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit]
I0920 16:27:28.537768   40830 install.go:106] running: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit]
I0920 16:27:28.554322   40830 install.go:99] testing: [sudo -n chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit]
I0920 16:27:28.569248   40830 install.go:106] running: [sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperKitDriverInstallOrUpdate1801094318/002/docker-machine-driver-hyperkit]
--- PASS: TestHyperKitDriverInstallOrUpdate (8.85s)
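
The download sequence in the log above tries the arch-specific driver URL first and, when its checksum file returns HTTP 404, falls back to the common (un-suffixed) artifact name. A minimal shell sketch of that ordering, assuming only `curl`; the `candidate_urls` and `download_driver` helpers are illustrative, not minikube code:

```shell
#!/bin/sh
# Print the candidate download URLs in the order the log shows:
# arch-specific first, then the common (un-suffixed) fallback.
candidate_urls() {
    ver="$1"; arch="$2"
    base="https://github.com/kubernetes/minikube/releases/download/$ver"
    driver="docker-machine-driver-hyperkit"
    printf '%s\n' "$base/$driver-$arch" "$base/$driver"
}

# Try each candidate until one succeeds; `curl -f` turns an HTTP
# error such as 404 into a non-zero exit, triggering the fallback.
download_driver() {
    for url in $(candidate_urls "$1" "$2"); do
        if curl -fsSL "$url" -o docker-machine-driver-hyperkit; then
            echo "downloaded $url"
            return 0
        fi
        echo "fetch failed for $url; trying next candidate" >&2
    done
    return 1
}

# usage: download_driver v1.3.0 amd64
```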

TestErrorSpam/setup (18.83s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-873000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-873000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 --driver=docker : (18.830109511s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.1."
--- PASS: TestErrorSpam/setup (18.83s)

TestErrorSpam/start (2.2s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 start --dry-run
--- PASS: TestErrorSpam/start (2.20s)

TestErrorSpam/status (0.81s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 status
--- PASS: TestErrorSpam/status (0.81s)

TestErrorSpam/pause (1.41s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 pause
--- PASS: TestErrorSpam/pause (1.41s)

TestErrorSpam/unpause (1.48s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 unpause
--- PASS: TestErrorSpam/unpause (1.48s)

TestErrorSpam/stop (11.22s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 stop: (10.7013413s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-873000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-873000 stop
--- PASS: TestErrorSpam/stop (11.22s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19672-40263/.minikube/files/etc/test/nested/copy/40830/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (61.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-490000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m1.577790327s)
--- PASS: TestFunctional/serial/StartWithProxy (61.58s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.06s)

=== RUN   TestFunctional/serial/SoftStart
I0920 15:58:37.630296   40830 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-490000 --alsologtostderr -v=8: (37.055973392s)
functional_test.go:663: soft start took 37.056533788s for "functional-490000" cluster.
I0920 15:59:14.698582   40830 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (37.06s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-490000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 cache add registry.k8s.io/pause:3.1: (1.1023408s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 cache add registry.k8s.io/pause:3.3: (1.138823895s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.21s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local883675025/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache add minikube-local-cache-test:functional-490000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 cache add minikube-local-cache-test:functional-490000: (1.004760952s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache delete minikube-local-cache-test:functional-490000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-490000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (259.44099ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.51s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 kubectl -- --context functional-490000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 kubectl -- --context functional-490000 get pods: (1.208555767s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.21s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-490000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-490000 get pods: (1.570883798s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

TestFunctional/serial/ExtraConfig (40.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-490000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.178924916s)
functional_test.go:761: restart took 40.17905219s for "functional-490000" cluster.
I0920 16:00:04.496093   40830 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.18s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-490000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 logs: (3.003531634s)
--- PASS: TestFunctional/serial/LogsCmd (3.00s)

TestFunctional/serial/LogsFileCmd (2.84s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd414665480/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd414665480/001/logs.txt: (2.839939038s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.84s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-490000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-490000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-490000: exit status 115 (398.472883ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31766 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-490000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)
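
The SVC_UNREACHABLE failure above is minikube refusing to open a service that no running pod backs. One way to pre-check that condition is to ask for the service's ready endpoint addresses before calling `minikube service`; the `has_endpoints` helper below is an illustrative sketch under that assumption, not part of the test suite:

```shell
#!/bin/sh
# Succeed only if the named service has at least one ready endpoint
# address; an empty list is the "no running pod for service ... found"
# condition that makes `minikube service` exit with SVC_UNREACHABLE.
has_endpoints() {
    svc="$1"; ns="${2:-default}"
    eps=$(kubectl get endpoints "$svc" -n "$ns" \
          -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)
    [ -n "$eps" ]
}

# usage: has_endpoints invalid-svc || echo "no running pod backs invalid-svc" >&2
```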

TestFunctional/parallel/ConfigCmd (0.49s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 config get cpus: exit status 14 (59.949267ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 config get cpus: exit status 14 (57.196508ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

TestFunctional/parallel/DashboardCmd (15.36s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-490000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-490000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 42878: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.36s)

TestFunctional/parallel/DryRun (1.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-490000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (723.138066ms)

-- stdout --
	* [functional-490000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0920 16:01:34.149362   42800 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:01:34.150064   42800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:01:34.150073   42800 out.go:358] Setting ErrFile to fd 2...
	I0920 16:01:34.150079   42800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:01:34.150578   42800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:01:34.152181   42800 out.go:352] Setting JSON to false
	I0920 16:01:34.175979   42800 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":23457,"bootTime":1726849837,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 16:01:34.176123   42800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 16:01:34.197414   42800 out.go:177] * [functional-490000] minikube v1.34.0 on Darwin 14.6.1
	I0920 16:01:34.239887   42800 notify.go:220] Checking for updates...
	I0920 16:01:34.261515   42800 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:01:34.303495   42800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 16:01:34.345467   42800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 16:01:34.387314   42800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:01:34.429535   42800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	I0920 16:01:34.471435   42800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:01:34.493027   42800 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:01:34.493580   42800 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:01:34.518385   42800 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0920 16:01:34.518583   42800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:01:34.604801   42800 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:80 SystemTime:2024-09-20 23:01:34.595027442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 16:01:34.626652   42800 out.go:177] * Using the docker driver based on existing profile
	I0920 16:01:34.668752   42800 start.go:297] selected driver: docker
	I0920 16:01:34.668783   42800 start.go:901] validating driver "docker" against &{Name:functional-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:01:34.668903   42800 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:01:34.694667   42800 out.go:201] 
	W0920 16:01:34.715753   42800 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 16:01:34.752764   42800 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.42s)
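The exit 23 above is the expected outcome: DryRun deliberately requests 250MB, below minikube's usable minimum. The validation the test asserts on can be mirrored in a few lines of shell (an illustrative re-implementation, not minikube's actual code; the 1800MB floor and exit code 23 are taken from the captured output):

```shell
# Sketch of the requested-memory check: requests below the 1800MB
# usable minimum fail with the same status the dry-run exits with.
check_memory() {
  req_mb=$1
  if [ "$req_mb" -lt 1800 ]; then
    echo "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation ${req_mb}MiB is less than the usable minimum of 1800MB" >&2
    return 23
  fi
  return 0
}

check_memory 250 || echo "exit status $?"   # prints: exit status 23
```

The second `start` invocation (functional_test.go:991) omits `--memory`, so it passes validation and the test records PASS.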

TestFunctional/parallel/InternationalLanguage (0.58s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-490000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-490000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (581.23497ms)

-- stdout --
	* [functional-490000] minikube v1.34.0 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0920 16:01:35.562835   42855 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:01:35.562985   42855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:01:35.562989   42855 out.go:358] Setting ErrFile to fd 2...
	I0920 16:01:35.562993   42855 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:01:35.563167   42855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:01:35.564745   42855 out.go:352] Setting JSON to false
	I0920 16:01:35.587739   42855 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":23458,"bootTime":1726849837,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0920 16:01:35.587831   42855 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0920 16:01:35.609245   42855 out.go:177] * [functional-490000] minikube v1.34.0 sur Darwin 14.6.1
	I0920 16:01:35.651536   42855 out.go:177]   - MINIKUBE_LOCATION=19672
	I0920 16:01:35.651606   42855 notify.go:220] Checking for updates...
	I0920 16:01:35.693318   42855 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	I0920 16:01:35.714362   42855 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0920 16:01:35.735417   42855 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 16:01:35.756495   42855 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	I0920 16:01:35.798330   42855 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 16:01:35.819953   42855 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:01:35.820725   42855 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 16:01:35.844848   42855 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0920 16:01:35.845008   42855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 16:01:35.925650   42855 info.go:266] docker info: {ID:5cf611e6-fa9d-4ecb-b0dd-438e8c824220 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:80 SystemTime:2024-09-20 23:01:35.916023224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:11 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:8220102656 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0920 16:01:35.947561   42855 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 16:01:35.969050   42855 start.go:297] selected driver: docker
	I0920 16:01:35.969083   42855 start.go:901] validating driver "docker" against &{Name:functional-490000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-490000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 16:01:35.969203   42855 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 16:01:35.993986   42855 out.go:201] 
	W0920 16:01:36.014981   42855 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 16:01:36.037745   42855 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.58s)
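Translated, the French stderr above reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB" — the same failure as DryRun, rendered in the locale minikube detects, which is exactly what this test verifies. A minimal sketch of locale-keyed message selection (illustrative only; minikube's real translations come from its embedded message catalogs):

```shell
# Pick a message by the language prefix of a locale string
# (e.g. fr_FR.UTF-8 -> fr), falling back to English.
msg_for_locale() {
  case "${1%%[_.]*}" in
    fr) echo "Utilisation du pilote docker basé sur le profil existant" ;;
    *)  echo "Using the docker driver based on existing profile" ;;
  esac
}

msg_for_locale fr_FR.UTF-8   # French rendering of the driver-selection line
msg_for_locale en_US.UTF-8   # English fallback
```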

TestFunctional/parallel/StatusCmd (0.82s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.82s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.09s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [63aba758-9e26-448a-ba83-ce59fe28281e] Running
E0920 16:01:00.376399   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004617181s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-490000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-490000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-490000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-490000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ac66ceb7-8041-482f-becd-51914298393f] Pending
helpers_test.go:344: "sp-pod" [ac66ceb7-8041-482f-becd-51914298393f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ac66ceb7-8041-482f-becd-51914298393f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.005537372s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-490000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-490000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-490000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [30aba7f5-d053-43d2-adbe-db9e910835b8] Pending
helpers_test.go:344: "sp-pod" [30aba7f5-d053-43d2-adbe-db9e910835b8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [30aba7f5-d053-43d2-adbe-db9e910835b8] Running
E0920 16:01:20.858440   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.006315394s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-490000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.09s)
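The "waiting ... for pods matching" lines above come from a poll-until-healthy loop: the test repeatedly reads pod status until the label selector matches a Running pod or the timeout expires. The same pattern, reduced to plain shell with a local file standing in for the pod phase (a generic stand-in, not the helpers_test.go implementation):

```shell
# Poll a status source until it reports "Running", mirroring the
# pod-readiness waits in the test log above.
wait_for_running() {
  until [ "$(cat "$1" 2>/dev/null)" = "Running" ]; do
    sleep 1
  done
}

phase_file=$(mktemp)
echo Pending > "$phase_file"
( sleep 1; echo Running > "$phase_file" ) &   # pod becomes Ready shortly after creation
wait_for_running "$phase_file"
echo "pod is Running"
rm -f "$phase_file"
```

The real helper also enforces a deadline (3m0s here) and re-lists pods by label selector on each iteration; the sketch keeps only the polling core.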

TestFunctional/parallel/SSHCmd (0.51s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

TestFunctional/parallel/CpCmd (1.7s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh -n functional-490000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cp functional-490000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2579951859/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh -n functional-490000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh -n functional-490000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

TestFunctional/parallel/MySQL (26.44s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-490000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-47rcr" [fb1ab027-05ce-4d8f-af56-fa84a210864e] Pending
helpers_test.go:344: "mysql-6cdb49bbb-47rcr" [fb1ab027-05ce-4d8f-af56-fa84a210864e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-47rcr" [fb1ab027-05ce-4d8f-af56-fa84a210864e] Running
E0920 16:00:39.878843   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:39.885144   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:39.898166   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:39.919384   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:39.960946   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:40.042213   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:40.203503   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:40.525093   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:00:41.166537   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.004461692s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-490000 exec mysql-6cdb49bbb-47rcr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-490000 exec mysql-6cdb49bbb-47rcr -- mysql -ppassword -e "show databases;": exit status 1 (126.409837ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0920 16:00:42.963669   40830 retry.go:31] will retry after 868.405132ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-490000 exec mysql-6cdb49bbb-47rcr -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-490000 exec mysql-6cdb49bbb-47rcr -- mysql -ppassword -e "show databases;": exit status 1 (105.235355ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I0920 16:00:43.937947   40830 retry.go:31] will retry after 2.08652956s: exit status 1
E0920 16:00:45.010928   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1807: (dbg) Run:  kubectl --context functional-490000 exec mysql-6cdb49bbb-47rcr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.44s)
TestFunctional/parallel/FileSync (0.27s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/40830/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /etc/test/nested/copy/40830/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
TestFunctional/parallel/CertSync (1.77s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/40830.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /etc/ssl/certs/40830.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/40830.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /usr/share/ca-certificates/40830.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/408302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /etc/ssl/certs/408302.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/408302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /usr/share/ca-certificates/408302.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.77s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-490000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh "sudo systemctl is-active crio": exit status 1 (289.401762ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.29s)
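Note that the non-zero exit above is the expected outcome: `systemctl is-active` exits 0 only when the unit is active, and here it exits 3 with "inactive" on stdout, confirming crio is switched off. A sketch of how such a result can be interpreted as a pass (the helper name is illustrative, not minikube's code):

```go
package main

import (
	"fmt"
	"strings"
)

// runtimeDisabled interprets the result of `systemctl is-active <unit>`:
// a non-zero exit status (3 in the log above) with "inactive" on stdout
// means the runtime is correctly switched off, not that the check failed.
func runtimeDisabled(exitCode int, stdout string) bool {
	return exitCode != 0 && strings.TrimSpace(stdout) == "inactive"
}

func main() {
	// Values taken from the crio check above: exit status 3, stdout "inactive".
	fmt.Println(runtimeDisabled(3, "inactive\n")) // true
}
```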
TestFunctional/parallel/License (0.64s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)
TestFunctional/parallel/Version/short (0.1s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)
TestFunctional/parallel/Version/components (0.69s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.69s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-490000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-490000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-490000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-490000 image ls --format short --alsologtostderr:
I0920 16:01:45.963858   43038 out.go:345] Setting OutFile to fd 1 ...
I0920 16:01:45.964083   43038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:45.964089   43038 out.go:358] Setting ErrFile to fd 2...
I0920 16:01:45.964100   43038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:45.964319   43038 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
I0920 16:01:45.965017   43038 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:45.965118   43038 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:45.965599   43038 cli_runner.go:164] Run: docker container inspect functional-490000 --format={{.State.Status}}
I0920 16:01:45.985256   43038 ssh_runner.go:195] Run: systemctl --version
I0920 16:01:45.985347   43038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-490000
I0920 16:01:46.005398   43038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61769 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/functional-490000/id_rsa Username:docker}
I0920 16:01:46.094167   43038 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-490000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-490000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 175ffd71cce3d | 88.4MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 9aa1fad941575 | 67.4MB |
| docker.io/library/nginx                     | latest            | 39286ab8a5e14 | 188MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | 6bab7719df100 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 60c005f310ff3 | 91.5MB |
| docker.io/library/nginx                     | alpine            | c7b4f26a7d93f | 43.2MB |
| localhost/my-image                          | functional-490000 | 38a9aaf7407c1 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-490000 | b342124bb4f46 | 30B    |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-490000 image ls --format table --alsologtostderr:
I0920 16:01:49.430203   43065 out.go:345] Setting OutFile to fd 1 ...
I0920 16:01:49.430395   43065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:49.430400   43065 out.go:358] Setting ErrFile to fd 2...
I0920 16:01:49.430404   43065 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:49.430578   43065 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
I0920 16:01:49.431246   43065 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:49.431341   43065 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:49.431780   43065 cli_runner.go:164] Run: docker container inspect functional-490000 --format={{.State.Status}}
I0920 16:01:49.450073   43065 ssh_runner.go:195] Run: systemctl --version
I0920 16:01:49.450161   43065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-490000
I0920 16:01:49.468336   43065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61769 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/functional-490000/id_rsa Username:docker}
I0920 16:01:49.560070   43065 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/09/20 16:01:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-490000 image ls --format json --alsologtostderr:
[{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"88400000"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"67400000"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"38a9aaf7407c1a38ee529a7cd4cf99393c01e291ba4f3c3be9e660f0065ff802","repoDigests":[],"repoTags":["localhost/my-image:functional-490000"],"size":"1240000"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"91500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"b342124bb4f46a3e292e06e6fedcb3c000cb54a7b3f3e3d7445612f7d4113376","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-490000"],"size":"30"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"94200000"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-490000"],"size":"4940000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-490000 image ls --format json --alsologtostderr:
I0920 16:01:49.197027   43061 out.go:345] Setting OutFile to fd 1 ...
I0920 16:01:49.197322   43061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:49.197327   43061 out.go:358] Setting ErrFile to fd 2...
I0920 16:01:49.197331   43061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:49.197517   43061 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
I0920 16:01:49.198237   43061 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:49.198335   43061 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:49.198754   43061 cli_runner.go:164] Run: docker container inspect functional-490000 --format={{.State.Status}}
I0920 16:01:49.217298   43061 ssh_runner.go:195] Run: systemctl --version
I0920 16:01:49.217385   43061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-490000
I0920 16:01:49.236017   43061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61769 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/functional-490000/id_rsa Username:docker}
I0920 16:01:49.326246   43061 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-490000 image ls --format yaml --alsologtostderr:
- id: b342124bb4f46a3e292e06e6fedcb3c000cb54a7b3f3e3d7445612f7d4113376
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-490000
size: "30"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-490000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 38a9aaf7407c1a38ee529a7cd4cf99393c01e291ba4f3c3be9e660f0065ff802
repoDigests: []
repoTags:
- localhost/my-image:functional-490000
size: "1240000"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "94200000"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "67400000"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "88400000"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "91500000"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-490000 image ls --format yaml --alsologtostderr:
I0920 16:01:48.962306   43057 out.go:345] Setting OutFile to fd 1 ...
I0920 16:01:48.963088   43057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:48.963106   43057 out.go:358] Setting ErrFile to fd 2...
I0920 16:01:48.963112   43057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:48.963584   43057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
I0920 16:01:48.964254   43057 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:48.964349   43057 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:48.964760   43057 cli_runner.go:164] Run: docker container inspect functional-490000 --format={{.State.Status}}
I0920 16:01:48.983471   43057 ssh_runner.go:195] Run: systemctl --version
I0920 16:01:48.983568   43057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-490000
I0920 16:01:49.001778   43057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61769 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/functional-490000/id_rsa Username:docker}
I0920 16:01:49.093769   43057 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh pgrep buildkitd: exit status 1 (232.246394ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image build -t localhost/my-image:functional-490000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-490000 image build -t localhost/my-image:functional-490000 testdata/build --alsologtostderr: (2.305516193s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-490000 image build -t localhost/my-image:functional-490000 testdata/build --alsologtostderr:
I0920 16:01:46.430812   43049 out.go:345] Setting OutFile to fd 1 ...
I0920 16:01:46.431688   43049 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:46.431696   43049 out.go:358] Setting ErrFile to fd 2...
I0920 16:01:46.431700   43049 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 16:01:46.431895   43049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
I0920 16:01:46.432618   43049 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:46.433393   43049 config.go:182] Loaded profile config "functional-490000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0920 16:01:46.433836   43049 cli_runner.go:164] Run: docker container inspect functional-490000 --format={{.State.Status}}
I0920 16:01:46.453307   43049 ssh_runner.go:195] Run: systemctl --version
I0920 16:01:46.453405   43049 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-490000
I0920 16:01:46.473164   43049 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61769 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/functional-490000/id_rsa Username:docker}
I0920 16:01:46.561898   43049 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3027272140.tar
I0920 16:01:46.561997   43049 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 16:01:46.571022   43049 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3027272140.tar
I0920 16:01:46.575436   43049 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3027272140.tar: stat -c "%s %y" /var/lib/minikube/build/build.3027272140.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3027272140.tar': No such file or directory
I0920 16:01:46.575482   43049 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3027272140.tar --> /var/lib/minikube/build/build.3027272140.tar (3072 bytes)
I0920 16:01:46.599184   43049 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3027272140
I0920 16:01:46.609007   43049 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3027272140 -xf /var/lib/minikube/build/build.3027272140.tar
I0920 16:01:46.618993   43049 docker.go:360] Building image: /var/lib/minikube/build/build.3027272140
I0920 16:01:46.619090   43049 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-490000 /var/lib/minikube/build/build.3027272140
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.4s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:38a9aaf7407c1a38ee529a7cd4cf99393c01e291ba4f3c3be9e660f0065ff802 done
#8 naming to localhost/my-image:functional-490000 done
#8 DONE 0.0s
I0920 16:01:48.626670   43049 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-490000 /var/lib/minikube/build/build.3027272140: (2.007523958s)
I0920 16:01:48.626741   43049 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3027272140
I0920 16:01:48.635270   43049 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3027272140.tar
I0920 16:01:48.643365   43049 build_images.go:217] Built localhost/my-image:functional-490000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.3027272140.tar
I0920 16:01:48.643390   43049 build_images.go:133] succeeded building to: functional-490000
I0920 16:01:48.643395   43049 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.77s)

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.769908691s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-490000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/DockerEnv/bash (1.04s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-490000 docker-env) && out/minikube-darwin-amd64 status -p functional-490000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-490000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.04s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image load --daemon kicbase/echo-server:functional-490000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image load --daemon kicbase/echo-server:functional-490000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.79s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-490000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image load --daemon kicbase/echo-server:functional-490000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image save kicbase/echo-server:functional-490000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image rm kicbase/echo-server:functional-490000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-490000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 image save --daemon kicbase/echo-server:functional-490000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-490000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.51s)

TestFunctional/parallel/ServiceCmd/DeployApp (23.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-490000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-490000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-hzkbd" [725902ca-fe90-47dc-9a0c-26bd4e3d6595] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-hzkbd" [725902ca-fe90-47dc-9a0c-26bd4e3d6595] Running
E0920 16:00:42.448838   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 23.00653671s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (23.14s)

TestFunctional/parallel/ServiceCmd/List (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.45s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 42638: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 service list -o json
functional_test.go:1494: Took "579.795628ms" to run "out/minikube-darwin-amd64 -p functional-490000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-490000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [dc8622c7-9d95-47e5-8211-a22b47b57dd8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [dc8622c7-9d95-47e5-8211-a22b47b57dd8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.005611432s
I0920 16:00:57.284942   40830 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.15s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 service --namespace=default --https --url hello-node
E0920 16:00:50.133824   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 service --namespace=default --https --url hello-node: signal: killed (15.002250535s)

-- stdout --
	https://127.0.0.1:62027

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:62027
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-490000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-490000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 42667: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 service hello-node --url --format={{.IP}}: signal: killed (15.003087306s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 service hello-node --url: signal: killed (15.00283428s)

-- stdout --
	http://127.0.0.1:62093

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:62093
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/MountCmd/any-port (7.92s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2228791223/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726873292814000000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2228791223/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726873292814000000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2228791223/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726873292814000000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2228791223/001/test-1726873292814000000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (286.868101ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0920 16:01:33.101582   40830 retry.go:31] will retry after 377.421727ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 23:01 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 23:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 23:01 test-1726873292814000000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh cat /mount-9p/test-1726873292814000000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-490000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ce034463-643c-43a0-8c7b-0af7acaad41a] Pending
helpers_test.go:344: "busybox-mount" [ce034463-643c-43a0-8c7b-0af7acaad41a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ce034463-643c-43a0-8c7b-0af7acaad41a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ce034463-643c-43a0-8c7b-0af7acaad41a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005200762s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-490000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2228791223/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.92s)

TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "501.178616ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "131.215763ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.63s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "314.795517ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "84.458117ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/specific-port (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3221918382/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.86148ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 16:01:40.984108   40830 retry.go:31] will retry after 483.808245ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3221918382/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh "sudo umount -f /mount-9p": exit status 1 (277.917432ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-490000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port3221918382/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.78s)
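Editor's note: the `retry.go:31` line above shows the harness re-running the `findmnt` check after a delay until the 9p mount appears. A minimal POSIX-shell sketch of that retry pattern follows; the `retry` helper and its attempt/delay arguments are hypothetical illustrations, not minikube code.

```shell
#!/bin/sh
# Hypothetical retry helper mirroring the harness's retry.go behaviour:
# re-run a command until it succeeds or the attempt budget is exhausted.
retry() {
  attempts=$1
  delay=$2
  shift 2
  i=1
  while ! "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      return 1                      # out of attempts: propagate failure
    fi
    echo "will retry after ${delay}s" >&2
    sleep "$delay"
    i=$((i + 1))
  done
}

# e.g. waiting for the 9p mount checked in the log above would look like:
#   retry 5 1 sh -c 'findmnt -T /mount-9p | grep -q 9p'
retry 3 1 true && echo "succeeded"
```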

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T" /mount1: exit status 1 (357.138462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 16:01:42.881634   40830 retry.go:31] will retry after 608.949099ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-490000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-490000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-490000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1649925752/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-490000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-490000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-490000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (88.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-833000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0920 16:02:01.821161   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-833000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m27.730006545s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (88.44s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (6.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- rollout status deployment/busybox
E0920 16:03:23.744439   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-833000 -- rollout status deployment/busybox: (3.448359555s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-kmmzl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-l6n76 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-qwqtz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-kmmzl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-l6n76 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-qwqtz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-kmmzl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-l6n76 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-qwqtz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.03s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-kmmzl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-kmmzl -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-l6n76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-l6n76 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-qwqtz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-833000 -- exec busybox-7dff88458-qwqtz -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)
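Editor's note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP from BusyBox-style nslookup output: line 5 carries the answer record, and the third space-separated field is the address. A sketch against a hypothetical captured output (the sample text is illustrative, not taken from this run):

```shell
#!/bin/sh
# Hypothetical BusyBox nslookup output; line 5 holds the answer record.
sample='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.65.254'

# Same extraction the test runs inside the busybox pod:
# pick line 5, then the third space-separated field.
ip=$(printf '%s\n' "$sample" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"    # 192.168.65.254
```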

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (16.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-833000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-833000 -v=7 --alsologtostderr: (15.654296971s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (16.50s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-833000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp testdata/cp-test.txt ha-833000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile2100893445/001/cp-test_ha-833000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000:/home/docker/cp-test.txt ha-833000-m02:/home/docker/cp-test_ha-833000_ha-833000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test_ha-833000_ha-833000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000:/home/docker/cp-test.txt ha-833000-m03:/home/docker/cp-test_ha-833000_ha-833000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test_ha-833000_ha-833000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000:/home/docker/cp-test.txt ha-833000-m04:/home/docker/cp-test_ha-833000_ha-833000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test_ha-833000_ha-833000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp testdata/cp-test.txt ha-833000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile2100893445/001/cp-test_ha-833000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m02:/home/docker/cp-test.txt ha-833000:/home/docker/cp-test_ha-833000-m02_ha-833000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test_ha-833000-m02_ha-833000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m02:/home/docker/cp-test.txt ha-833000-m03:/home/docker/cp-test_ha-833000-m02_ha-833000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test_ha-833000-m02_ha-833000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m02:/home/docker/cp-test.txt ha-833000-m04:/home/docker/cp-test_ha-833000-m02_ha-833000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test_ha-833000-m02_ha-833000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp testdata/cp-test.txt ha-833000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile2100893445/001/cp-test_ha-833000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m03:/home/docker/cp-test.txt ha-833000:/home/docker/cp-test_ha-833000-m03_ha-833000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test_ha-833000-m03_ha-833000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m03:/home/docker/cp-test.txt ha-833000-m02:/home/docker/cp-test_ha-833000-m03_ha-833000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test_ha-833000-m03_ha-833000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m03:/home/docker/cp-test.txt ha-833000-m04:/home/docker/cp-test_ha-833000-m03_ha-833000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test_ha-833000-m03_ha-833000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp testdata/cp-test.txt ha-833000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile2100893445/001/cp-test_ha-833000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m04:/home/docker/cp-test.txt ha-833000:/home/docker/cp-test_ha-833000-m04_ha-833000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000 "sudo cat /home/docker/cp-test_ha-833000-m04_ha-833000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m04:/home/docker/cp-test.txt ha-833000-m02:/home/docker/cp-test_ha-833000-m04_ha-833000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m02 "sudo cat /home/docker/cp-test_ha-833000-m04_ha-833000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 cp ha-833000-m04:/home/docker/cp-test.txt ha-833000-m03:/home/docker/cp-test_ha-833000-m04_ha-833000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 ssh -n ha-833000-m03 "sudo cat /home/docker/cp-test_ha-833000-m04_ha-833000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.10s)
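Editor's note: the CopyFile block above is a full pairwise matrix: testdata is pushed to each node, then copied from every node to every other node and read back over ssh. The command sequence can be sketched as follows; the profile name and paths come from the log, while the `copy_matrix` helper itself is a hypothetical illustration.

```shell
#!/bin/sh
# Emit the node-to-node copy commands exercised above:
# 4 nodes yield 4*3 = 12 cross-node copies.
copy_matrix() {
  nodes="ha-833000 ha-833000-m02 ha-833000-m03 ha-833000-m04"
  for src in $nodes; do
    for dst in $nodes; do
      [ "$src" = "$dst" ] && continue   # skip copying a node to itself
      echo "minikube -p ha-833000 cp ${src}:/home/docker/cp-test.txt ${dst}:/home/docker/cp-test_${src}_${dst}.txt"
    done
  done
}
copy_matrix
```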

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-833000 node stop m02 -v=7 --alsologtostderr: (10.764410008s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr: exit status 7 (655.031398ms)

                                                
                                                
-- stdout --
	ha-833000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-833000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 16:04:14.413638   43826 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:04:14.413829   43826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:04:14.413835   43826 out.go:358] Setting ErrFile to fd 2...
	I0920 16:04:14.413839   43826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:04:14.414025   43826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:04:14.414209   43826 out.go:352] Setting JSON to false
	I0920 16:04:14.414230   43826 mustload.go:65] Loading cluster: ha-833000
	I0920 16:04:14.414273   43826 notify.go:220] Checking for updates...
	I0920 16:04:14.414625   43826 config.go:182] Loaded profile config "ha-833000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:04:14.414649   43826 status.go:174] checking status of ha-833000 ...
	I0920 16:04:14.415177   43826 cli_runner.go:164] Run: docker container inspect ha-833000 --format={{.State.Status}}
	I0920 16:04:14.434047   43826 status.go:364] ha-833000 host status = "Running" (err=<nil>)
	I0920 16:04:14.434102   43826 host.go:66] Checking if "ha-833000" exists ...
	I0920 16:04:14.434400   43826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833000
	I0920 16:04:14.453302   43826 host.go:66] Checking if "ha-833000" exists ...
	I0920 16:04:14.453573   43826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:04:14.453654   43826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833000
	I0920 16:04:14.472058   43826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62237 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/ha-833000/id_rsa Username:docker}
	I0920 16:04:14.563215   43826 ssh_runner.go:195] Run: systemctl --version
	I0920 16:04:14.567669   43826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:04:14.577896   43826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-833000
	I0920 16:04:14.597080   43826 kubeconfig.go:125] found "ha-833000" server: "https://127.0.0.1:62236"
	I0920 16:04:14.597113   43826 api_server.go:166] Checking apiserver status ...
	I0920 16:04:14.597171   43826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:04:14.607994   43826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2310/cgroup
	W0920 16:04:14.617050   43826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2310/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 16:04:14.617116   43826 ssh_runner.go:195] Run: ls
	I0920 16:04:14.621803   43826 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62236/healthz ...
	I0920 16:04:14.626933   43826 api_server.go:279] https://127.0.0.1:62236/healthz returned 200:
	ok
	I0920 16:04:14.626948   43826 status.go:456] ha-833000 apiserver status = Running (err=<nil>)
	I0920 16:04:14.626958   43826 status.go:176] ha-833000 status: &{Name:ha-833000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:04:14.626970   43826 status.go:174] checking status of ha-833000-m02 ...
	I0920 16:04:14.627234   43826 cli_runner.go:164] Run: docker container inspect ha-833000-m02 --format={{.State.Status}}
	I0920 16:04:14.645696   43826 status.go:364] ha-833000-m02 host status = "Stopped" (err=<nil>)
	I0920 16:04:14.645730   43826 status.go:377] host is not running, skipping remaining checks
	I0920 16:04:14.645741   43826 status.go:176] ha-833000-m02 status: &{Name:ha-833000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:04:14.645761   43826 status.go:174] checking status of ha-833000-m03 ...
	I0920 16:04:14.646118   43826 cli_runner.go:164] Run: docker container inspect ha-833000-m03 --format={{.State.Status}}
	I0920 16:04:14.664707   43826 status.go:364] ha-833000-m03 host status = "Running" (err=<nil>)
	I0920 16:04:14.664732   43826 host.go:66] Checking if "ha-833000-m03" exists ...
	I0920 16:04:14.665021   43826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833000-m03
	I0920 16:04:14.683435   43826 host.go:66] Checking if "ha-833000-m03" exists ...
	I0920 16:04:14.683701   43826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:04:14.683765   43826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833000-m03
	I0920 16:04:14.702677   43826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62340 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/ha-833000-m03/id_rsa Username:docker}
	I0920 16:04:14.794518   43826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:04:14.805081   43826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-833000
	I0920 16:04:14.825192   43826 kubeconfig.go:125] found "ha-833000" server: "https://127.0.0.1:62236"
	I0920 16:04:14.825215   43826 api_server.go:166] Checking apiserver status ...
	I0920 16:04:14.825282   43826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:04:14.835945   43826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2176/cgroup
	W0920 16:04:14.845037   43826 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2176/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 16:04:14.845099   43826 ssh_runner.go:195] Run: ls
	I0920 16:04:14.849124   43826 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:62236/healthz ...
	I0920 16:04:14.852969   43826 api_server.go:279] https://127.0.0.1:62236/healthz returned 200:
	ok
	I0920 16:04:14.852981   43826 status.go:456] ha-833000-m03 apiserver status = Running (err=<nil>)
	I0920 16:04:14.852992   43826 status.go:176] ha-833000-m03 status: &{Name:ha-833000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:04:14.853002   43826 status.go:174] checking status of ha-833000-m04 ...
	I0920 16:04:14.853271   43826 cli_runner.go:164] Run: docker container inspect ha-833000-m04 --format={{.State.Status}}
	I0920 16:04:14.871694   43826 status.go:364] ha-833000-m04 host status = "Running" (err=<nil>)
	I0920 16:04:14.871721   43826 host.go:66] Checking if "ha-833000-m04" exists ...
	I0920 16:04:14.871997   43826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-833000-m04
	I0920 16:04:14.890314   43826 host.go:66] Checking if "ha-833000-m04" exists ...
	I0920 16:04:14.890597   43826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:04:14.890658   43826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-833000-m04
	I0920 16:04:14.909179   43826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62463 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/ha-833000-m04/id_rsa Username:docker}
	I0920 16:04:14.999418   43826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:04:15.009790   43826 status.go:176] ha-833000-m04 status: &{Name:ha-833000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)
TestMultiControlPlane/serial/RestartSecondaryNode (39.72s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-833000 node start m02 -v=7 --alsologtostderr: (38.527680575s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr: (1.142066566s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (39.72s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (228.18s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-833000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-833000 -v=7 --alsologtostderr
E0920 16:05:19.838770   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:19.846671   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:19.859700   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:19.883269   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:19.926211   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:20.009811   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:20.172642   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:20.494682   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-833000 -v=7 --alsologtostderr: (24.417606052s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-833000 --wait=true -v=7 --alsologtostderr
E0920 16:05:21.137365   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:22.419041   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:24.982786   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:30.104593   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:39.887118   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:05:40.347853   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:06:00.831030   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:06:07.589116   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:06:41.793850   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:08:03.718103   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-833000 --wait=true -v=7 --alsologtostderr: (3m23.631924096s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-833000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (228.18s)
TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-833000 node delete m03 -v=7 --alsologtostderr: (8.67166411s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)
TestMultiControlPlane/serial/StopCluster (32.59s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-833000 stop -v=7 --alsologtostderr: (32.470784855s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr: exit status 7 (115.278422ms)
-- stdout --
	ha-833000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-833000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0920 16:09:27.181283   44223 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:09:27.181547   44223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:09:27.181553   44223 out.go:358] Setting ErrFile to fd 2...
	I0920 16:09:27.181556   44223 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:09:27.181729   44223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:09:27.181916   44223 out.go:352] Setting JSON to false
	I0920 16:09:27.181937   44223 mustload.go:65] Loading cluster: ha-833000
	I0920 16:09:27.181980   44223 notify.go:220] Checking for updates...
	I0920 16:09:27.182243   44223 config.go:182] Loaded profile config "ha-833000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:09:27.182265   44223 status.go:174] checking status of ha-833000 ...
	I0920 16:09:27.183059   44223 cli_runner.go:164] Run: docker container inspect ha-833000 --format={{.State.Status}}
	I0920 16:09:27.201518   44223 status.go:364] ha-833000 host status = "Stopped" (err=<nil>)
	I0920 16:09:27.201570   44223 status.go:377] host is not running, skipping remaining checks
	I0920 16:09:27.201578   44223 status.go:176] ha-833000 status: &{Name:ha-833000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:09:27.201626   44223 status.go:174] checking status of ha-833000-m02 ...
	I0920 16:09:27.201944   44223 cli_runner.go:164] Run: docker container inspect ha-833000-m02 --format={{.State.Status}}
	I0920 16:09:27.220075   44223 status.go:364] ha-833000-m02 host status = "Stopped" (err=<nil>)
	I0920 16:09:27.220097   44223 status.go:377] host is not running, skipping remaining checks
	I0920 16:09:27.220101   44223 status.go:176] ha-833000-m02 status: &{Name:ha-833000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:09:27.220112   44223 status.go:174] checking status of ha-833000-m04 ...
	I0920 16:09:27.220382   44223 cli_runner.go:164] Run: docker container inspect ha-833000-m04 --format={{.State.Status}}
	I0920 16:09:27.238290   44223 status.go:364] ha-833000-m04 host status = "Stopped" (err=<nil>)
	I0920 16:09:27.238314   44223 status.go:377] host is not running, skipping remaining checks
	I0920 16:09:27.238318   44223 status.go:176] ha-833000-m04 status: &{Name:ha-833000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.59s)
TestMultiControlPlane/serial/RestartCluster (81.76s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-833000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0920 16:10:19.845113   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:10:39.892864   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:10:47.563901   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-833000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m20.980938535s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.76s)
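The go-template passed to `kubectl get nodes` above walks each node's `.status.conditions` and prints the status of the `Ready` condition. The same check can be sketched in Python over `kubectl get nodes -o json` output (the two-node sample below is hypothetical, not taken from this run):

```python
import json

def ready_statuses(nodes_json: str) -> list[str]:
    # Mirrors the go-template: for every item, emit the status of the
    # condition whose type is "Ready" ("True", "False", or "Unknown").
    nodes = json.loads(nodes_json)
    return [c["status"]
            for item in nodes["items"]
            for c in item["status"]["conditions"]
            if c["type"] == "Ready"]

# Hypothetical sample shaped like `kubectl get nodes -o json` output:
sample = json.dumps({"items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
]})
print(ready_statuses(sample))  # -> ['True', 'True']
```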
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)
TestMultiControlPlane/serial/AddSecondaryNode (34.89s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-833000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-833000 --control-plane -v=7 --alsologtostderr: (34.03764874s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-833000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.89s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)
TestImageBuild/serial/Setup (18.35s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-679000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-679000 --driver=docker : (18.348470427s)
--- PASS: TestImageBuild/serial/Setup (18.35s)
TestImageBuild/serial/NormalBuild (1.81s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-679000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-679000: (1.809854703s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)
TestImageBuild/serial/BuildWithBuildArg (0.83s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-679000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)
TestImageBuild/serial/BuildWithDockerIgnore (0.63s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-679000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.63s)
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.88s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-679000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.88s)
TestJSONOutput/start/Command (32.82s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-985000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-985000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (32.814894981s)
--- PASS: TestJSONOutput/start/Command (32.82s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.46s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-985000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.48s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-985000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.48s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (10.69s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-985000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-985000 --output=json --user=testUser: (10.687279426s)
--- PASS: TestJSONOutput/stop/Command (10.69s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.58s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-674000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-674000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.151521ms)
-- stdout --
	{"specversion":"1.0","id":"8433879b-f6d5-4d99-b5d0-a639436d350d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-674000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"82123a98-59bf-4955-a7c6-02acf65578a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"a0cf6b93-ee33-4ad8-82bc-9409a9e9ddd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig"}}
	{"specversion":"1.0","id":"a761b58d-fd0e-4b0d-9992-8dc2b77bde26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"a2caaf77-521c-47b0-b07b-866069da916a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f029ed02-7437-4d0a-b392-5ee98f547c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube"}}
	{"specversion":"1.0","id":"564ef9c8-8b9d-4cf2-b6a5-5f4f6e63372c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8597afe0-86d4-4496-bb7e-d560dd8082c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-674000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-674000
--- PASS: TestErrorJSONOutput (0.58s)
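Each `--output=json` line in the stdout above is a CloudEvents envelope, and the test hinges on the final `io.k8s.sigs.minikube.error` event (`DRV_UNSUPPORTED_OS`, exit code 56). A small sketch of filtering those error events out of captured output — the sample lines below are abbreviated copies of the ones in the log, with shortened ids:

```python
import json

def error_events(log_lines):
    """Pick out io.k8s.sigs.minikube.error events from --output=json lines."""
    events = [json.loads(l) for l in log_lines if l.strip().startswith("{")]
    return [e["data"] for e in events
            if e["type"] == "io.k8s.sigs.minikube.error"]

# Abbreviated samples modeled on the output above (ids shortened):
lines = [
    '{"specversion":"1.0","id":"1","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json",'
    '"data":{"message":"MINIKUBE_LOCATION=19672"}}',
    '{"specversion":"1.0","id":"2","source":"https://minikube.sigs.k8s.io/",'
    '"type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json",'
    '"data":{"exitcode":"56","message":"The driver \\u0027fail\\u0027 is not '
    'supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS"}}',
]
errs = error_events(lines)
print(errs[0]["name"], errs[0]["exitcode"])  # -> DRV_UNSUPPORTED_OS 56
```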
TestKicCustomNetwork/create_custom_network (20.57s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-598000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-598000 --network=: (18.588497364s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-598000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-598000: (1.96157427s)
--- PASS: TestKicCustomNetwork/create_custom_network (20.57s)
TestKicCustomNetwork/use_default_bridge_network (21.65s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-636000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-636000 --network=bridge: (19.759779164s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-636000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-636000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-636000: (1.863050861s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (21.65s)

TestKicExistingNetwork (20.42s)

=== RUN   TestKicExistingNetwork
I0920 16:13:26.076354   40830 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 16:13:26.097860   40830 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 16:13:26.098045   40830 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 16:13:26.098070   40830 cli_runner.go:164] Run: docker network inspect existing-network
W0920 16:13:26.118063   40830 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 16:13:26.118079   40830 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0920 16:13:26.118096   40830 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0920 16:13:26.118281   40830 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 16:13:26.138705   40830 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c12ab0}
I0920 16:13:26.138740   40830 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0920 16:13:26.138821   40830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W0920 16:13:26.158677   40830 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W0920 16:13:26.158717   40830 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:
stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W0920 16:13:26.158732   40830 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I0920 16:13:26.160343   40830 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I0920 16:13:26.160826   40830 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004a07e0}
I0920 16:13:26.160843   40830 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
I0920 16:13:26.160957   40830 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 16:13:26.227164   40830 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-181000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-181000 --network=existing-network: (18.381094226s)
helpers_test.go:175: Cleaning up "existing-network-181000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-181000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-181000: (1.847802557s)
I0920 16:13:46.476872   40830 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (20.42s)
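The allocation walk in the log above (192.168.49.0/24 is already taken, so minikube falls back to 192.168.58.0/24) can be sketched with Python's `ipaddress` module. This is an illustrative stand-in, not minikube's actual network.go code; the function name `first_free_subnet` and the `CANDIDATES` list are invented for the example (minikube steps through private /24 candidates in a similar fashion).

```python
import ipaddress

# Hypothetical candidate /24 subnets, mirroring the step pattern seen in the log.
CANDIDATES = ["192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"]

def first_free_subnet(reserved):
    """Return the first candidate that overlaps none of the reserved networks."""
    reserved_nets = [ipaddress.ip_network(r) for r in reserved]
    for cand in CANDIDATES:
        net = ipaddress.ip_network(cand)
        if not any(net.overlaps(r) for r in reserved_nets):
            return str(net)
    return None

# 192.168.49.0/24 is held by another Docker network, so the next
# candidate is chosen -- the same fallback the log records.
print(first_free_subnet(["192.168.49.0/24"]))  # → 192.168.58.0/24
```

The "Pool overlaps with other one on this address space" daemon error in the log corresponds to the overlap check failing on the first candidate.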

TestKicCustomSubnet (20.17s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-010000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-010000 --subnet=192.168.60.0/24: (18.15177905s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-010000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-010000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-010000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-010000: (1.99936087s)
--- PASS: TestKicCustomSubnet (20.17s)

TestKicStaticIP (20.89s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-350000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-350000 --static-ip=192.168.200.200: (18.792791617s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-350000 ip
helpers_test.go:175: Cleaning up "static-ip-350000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-350000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-350000: (1.928098703s)
--- PASS: TestKicStaticIP (20.89s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (42.73s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-704000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-704000 --driver=docker : (18.645301169s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-716000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-716000 --driver=docker : (18.829020135s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-704000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-716000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-716000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-716000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-716000: (1.944245263s)
helpers_test.go:175: Cleaning up "first-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-704000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-704000: (1.956590678s)
--- PASS: TestMinikubeProfile (42.73s)

TestMountStart/serial/StartWithMountFirst (6.24s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-151000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-151000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (5.235416044s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.24s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-151000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.35s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-162000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
E0920 16:15:19.851093   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-162000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (5.347682065s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.35s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-162000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-151000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-151000 --alsologtostderr -v=5: (1.639986948s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-162000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.44s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-162000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-162000: (1.43844519s)
--- PASS: TestMountStart/serial/Stop (1.44s)

TestMountStart/serial/RestartStopped (7.92s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-162000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-162000: (6.917552953s)
--- PASS: TestMountStart/serial/RestartStopped (7.92s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-162000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (65.36s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-352000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0920 16:15:39.897679   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-352000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m4.890546287s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.36s)

TestMultiNode/serial/DeployApp2Nodes (54.54s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-352000 -- rollout status deployment/busybox: (2.711368341s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:45.317122   40830 retry.go:31] will retry after 1.297192443s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:46.761563   40830 retry.go:31] will retry after 1.101622381s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:48.010696   40830 retry.go:31] will retry after 3.20519966s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:51.361913   40830 retry.go:31] will retry after 2.560796185s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:54.070802   40830 retry.go:31] will retry after 2.827808485s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:16:57.046329   40830 retry.go:31] will retry after 4.699336568s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:17:01.892461   40830 retry.go:31] will retry after 14.355573147s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0920 16:17:02.964691   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0920 16:17:16.397204   40830 retry.go:31] will retry after 18.801790638s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-gct6r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-hgm8z -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-gct6r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-hgm8z -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-gct6r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-hgm8z -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (54.54s)
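The retry.go lines above show the test polling for the second pod IP with roughly increasing, jittered waits (1.29s, 1.10s, 3.21s, ... up to 18.8s). A minimal sketch of that pattern, assuming an exponential base with random jitter and a cap; the function name `backoff_delays` and its parameters are invented for illustration and are not minikube's actual retry implementation:

```python
import random

def backoff_delays(attempts, base=1.0, factor=1.6, jitter=0.5, cap=20.0):
    """Produce roughly increasing, jittered delays like the retry log above.

    Each delay is the current exponential step scaled by a random factor in
    [1 - jitter, 1 + jitter], then clamped to `cap` seconds.
    """
    delays = []
    delay = base
    for _ in range(attempts):
        delays.append(min(cap, delay * (1 + random.uniform(-jitter, jitter))))
        delay *= factor
    return delays

# Eight attempts, matching the number of retries logged before success.
for d in backoff_delays(8):
    print(f"will retry after {d:.2f}s")
```

Jitter explains why consecutive logged delays are not strictly monotonic (1.29s followed by 1.10s) even though the underlying step grows.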

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-gct6r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-gct6r -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-hgm8z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-352000 -- exec busybox-7dff88458-hgm8z -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

TestMultiNode/serial/AddNode (12.99s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-352000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-352000 -v 3 --alsologtostderr: (12.370751586s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (12.99s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-352000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.33s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp testdata/cp-test.txt multinode-352000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2118302733/001/cp-test_multinode-352000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000:/home/docker/cp-test.txt multinode-352000-m02:/home/docker/cp-test_multinode-352000_multinode-352000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test_multinode-352000_multinode-352000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000:/home/docker/cp-test.txt multinode-352000-m03:/home/docker/cp-test_multinode-352000_multinode-352000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test_multinode-352000_multinode-352000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp testdata/cp-test.txt multinode-352000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2118302733/001/cp-test_multinode-352000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m02:/home/docker/cp-test.txt multinode-352000:/home/docker/cp-test_multinode-352000-m02_multinode-352000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test_multinode-352000-m02_multinode-352000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m02:/home/docker/cp-test.txt multinode-352000-m03:/home/docker/cp-test_multinode-352000-m02_multinode-352000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test_multinode-352000-m02_multinode-352000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp testdata/cp-test.txt multinode-352000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile2118302733/001/cp-test_multinode-352000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m03:/home/docker/cp-test.txt multinode-352000:/home/docker/cp-test_multinode-352000-m03_multinode-352000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000 "sudo cat /home/docker/cp-test_multinode-352000-m03_multinode-352000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 cp multinode-352000-m03:/home/docker/cp-test.txt multinode-352000-m02:/home/docker/cp-test_multinode-352000-m03_multinode-352000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 ssh -n multinode-352000-m02 "sudo cat /home/docker/cp-test_multinode-352000-m03_multinode-352000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.33s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-352000 node stop m03: (1.354038224s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-352000 status: exit status 7 (444.278957ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-352000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-352000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr: exit status 7 (448.451121ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-352000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-352000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 16:18:02.620687   46322 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:18:02.620956   46322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:18:02.620961   46322 out.go:358] Setting ErrFile to fd 2...
	I0920 16:18:02.620965   46322 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:18:02.621151   46322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:18:02.621344   46322 out.go:352] Setting JSON to false
	I0920 16:18:02.621366   46322 mustload.go:65] Loading cluster: multinode-352000
	I0920 16:18:02.621409   46322 notify.go:220] Checking for updates...
	I0920 16:18:02.621722   46322 config.go:182] Loaded profile config "multinode-352000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:18:02.621743   46322 status.go:174] checking status of multinode-352000 ...
	I0920 16:18:02.622231   46322 cli_runner.go:164] Run: docker container inspect multinode-352000 --format={{.State.Status}}
	I0920 16:18:02.640868   46322 status.go:364] multinode-352000 host status = "Running" (err=<nil>)
	I0920 16:18:02.640897   46322 host.go:66] Checking if "multinode-352000" exists ...
	I0920 16:18:02.641176   46322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-352000
	I0920 16:18:02.659596   46322 host.go:66] Checking if "multinode-352000" exists ...
	I0920 16:18:02.659864   46322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:18:02.659942   46322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-352000
	I0920 16:18:02.678140   46322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63463 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/multinode-352000/id_rsa Username:docker}
	I0920 16:18:02.771544   46322 ssh_runner.go:195] Run: systemctl --version
	I0920 16:18:02.776102   46322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:18:02.786845   46322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-352000
	I0920 16:18:02.805984   46322 kubeconfig.go:125] found "multinode-352000" server: "https://127.0.0.1:63467"
	I0920 16:18:02.806015   46322 api_server.go:166] Checking apiserver status ...
	I0920 16:18:02.806063   46322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 16:18:02.816450   46322 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2264/cgroup
	W0920 16:18:02.825310   46322 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2264/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0920 16:18:02.825377   46322 ssh_runner.go:195] Run: ls
	I0920 16:18:02.829306   46322 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63467/healthz ...
	I0920 16:18:02.833714   46322 api_server.go:279] https://127.0.0.1:63467/healthz returned 200:
	ok
	I0920 16:18:02.833729   46322 status.go:456] multinode-352000 apiserver status = Running (err=<nil>)
	I0920 16:18:02.833738   46322 status.go:176] multinode-352000 status: &{Name:multinode-352000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:18:02.833750   46322 status.go:174] checking status of multinode-352000-m02 ...
	I0920 16:18:02.834026   46322 cli_runner.go:164] Run: docker container inspect multinode-352000-m02 --format={{.State.Status}}
	I0920 16:18:02.852060   46322 status.go:364] multinode-352000-m02 host status = "Running" (err=<nil>)
	I0920 16:18:02.852087   46322 host.go:66] Checking if "multinode-352000-m02" exists ...
	I0920 16:18:02.852359   46322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-352000-m02
	I0920 16:18:02.870595   46322 host.go:66] Checking if "multinode-352000-m02" exists ...
	I0920 16:18:02.870864   46322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 16:18:02.870926   46322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-352000-m02
	I0920 16:18:02.889182   46322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63507 SSHKeyPath:/Users/jenkins/minikube-integration/19672-40263/.minikube/machines/multinode-352000-m02/id_rsa Username:docker}
	I0920 16:18:02.980275   46322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 16:18:02.990519   46322 status.go:176] multinode-352000-m02 status: &{Name:multinode-352000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:18:02.990537   46322 status.go:174] checking status of multinode-352000-m03 ...
	I0920 16:18:02.990817   46322 cli_runner.go:164] Run: docker container inspect multinode-352000-m03 --format={{.State.Status}}
	I0920 16:18:03.009520   46322 status.go:364] multinode-352000-m03 host status = "Stopped" (err=<nil>)
	I0920 16:18:03.009544   46322 status.go:377] host is not running, skipping remaining checks
	I0920 16:18:03.009552   46322 status.go:176] multinode-352000-m03 status: &{Name:multinode-352000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (9.95s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-352000 node start m03 -v=7 --alsologtostderr: (9.302826003s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.95s)

TestMultiNode/serial/RestartKeepsNodes (98.58s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-352000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-352000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-352000: (22.537219052s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr: (1m15.919139214s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-352000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.58s)

TestMultiNode/serial/DeleteNode (5.26s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-352000 node delete m03: (4.70251267s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)

TestMultiNode/serial/StopMultiNode (21.5s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-352000 stop: (21.3084317s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-352000 status: exit status 7 (96.39691ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-352000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr: exit status 7 (96.884879ms)

-- stdout --
	multinode-352000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-352000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0920 16:20:18.263413   46581 out.go:345] Setting OutFile to fd 1 ...
	I0920 16:20:18.264143   46581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:20:18.264155   46581 out.go:358] Setting ErrFile to fd 2...
	I0920 16:20:18.264161   46581 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 16:20:18.264612   46581 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19672-40263/.minikube/bin
	I0920 16:20:18.264815   46581 out.go:352] Setting JSON to false
	I0920 16:20:18.264836   46581 mustload.go:65] Loading cluster: multinode-352000
	I0920 16:20:18.264872   46581 notify.go:220] Checking for updates...
	I0920 16:20:18.265166   46581 config.go:182] Loaded profile config "multinode-352000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0920 16:20:18.265187   46581 status.go:174] checking status of multinode-352000 ...
	I0920 16:20:18.265645   46581 cli_runner.go:164] Run: docker container inspect multinode-352000 --format={{.State.Status}}
	I0920 16:20:18.284321   46581 status.go:364] multinode-352000 host status = "Stopped" (err=<nil>)
	I0920 16:20:18.284343   46581 status.go:377] host is not running, skipping remaining checks
	I0920 16:20:18.284350   46581 status.go:176] multinode-352000 status: &{Name:multinode-352000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 16:20:18.284375   46581 status.go:174] checking status of multinode-352000-m02 ...
	I0920 16:20:18.284657   46581 cli_runner.go:164] Run: docker container inspect multinode-352000-m02 --format={{.State.Status}}
	I0920 16:20:18.302530   46581 status.go:364] multinode-352000-m02 host status = "Stopped" (err=<nil>)
	I0920 16:20:18.302549   46581 status.go:377] host is not running, skipping remaining checks
	I0920 16:20:18.302553   46581 status.go:176] multinode-352000-m02 status: &{Name:multinode-352000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.50s)

TestMultiNode/serial/RestartMultiNode (56.92s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0920 16:20:19.857362   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:20:39.903845   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-352000 --wait=true -v=8 --alsologtostderr --driver=docker : (56.371167654s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-352000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.92s)

TestMultiNode/serial/ValidateNameConflict (22.19s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-352000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-352000-m02 --driver=docker 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-352000-m02 --driver=docker : exit status 14 (447.085431ms)

-- stdout --
	* [multinode-352000-m02] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-352000-m02' is duplicated with machine name 'multinode-352000-m02' in profile 'multinode-352000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-352000-m03 --driver=docker 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-352000-m03 --driver=docker : (19.337388625s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-352000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-352000: exit status 80 (388.639876ms)

-- stdout --
	* Adding node m03 to cluster multinode-352000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-352000-m03 already exists in multinode-352000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-352000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-352000-m03: (1.953798673s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.19s)

TestPreload (99.64s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-636000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0920 16:21:42.940180   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-636000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m4.334850073s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-636000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-636000 image pull gcr.io/k8s-minikube/busybox: (1.611394251s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-636000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-636000: (10.759526434s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-636000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-636000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (20.684876652s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-636000 image list
helpers_test.go:175: Cleaning up "test-preload-636000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-636000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-636000: (2.007281305s)
--- PASS: TestPreload (99.64s)

TestScheduledStopUnix (91.46s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-657000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-657000 --memory=2048 --driver=docker : (18.322192319s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-657000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-657000 -n scheduled-stop-657000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-657000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 16:23:39.461347   40830 retry.go:31] will retry after 115.04µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.461546   40830 retry.go:31] will retry after 163.345µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.461792   40830 retry.go:31] will retry after 266.499µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.462160   40830 retry.go:31] will retry after 355.919µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.462650   40830 retry.go:31] will retry after 730.522µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.463577   40830 retry.go:31] will retry after 714.007µs: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.464516   40830 retry.go:31] will retry after 1.171024ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.465881   40830 retry.go:31] will retry after 2.33568ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.468568   40830 retry.go:31] will retry after 3.718092ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.472761   40830 retry.go:31] will retry after 4.127927ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.477652   40830 retry.go:31] will retry after 6.798049ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.484756   40830 retry.go:31] will retry after 12.937186ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.498446   40830 retry.go:31] will retry after 17.142037ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.515708   40830 retry.go:31] will retry after 12.982523ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
I0920 16:23:39.529094   40830 retry.go:31] will retry after 37.870481ms: open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/scheduled-stop-657000/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-657000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-657000 -n scheduled-stop-657000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-657000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-657000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-657000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-657000: exit status 7 (82.484425ms)

-- stdout --
	scheduled-stop-657000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-657000 -n scheduled-stop-657000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-657000 -n scheduled-stop-657000: exit status 7 (76.783595ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-657000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-657000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-657000: (1.721037424s)
--- PASS: TestScheduledStopUnix (91.46s)

TestSkaffold (111.01s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3064945029 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3064945029 version: (1.733041258s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-460000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-460000 --memory=2600 --driver=docker : (18.961460469s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3064945029 run --minikube-profile skaffold-460000 --kube-context skaffold-460000 --status-check=true --port-forward=false --interactive=false
E0920 16:25:19.856302   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:25:39.902942   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3064945029 run --minikube-profile skaffold-460000 --kube-context skaffold-460000 --status-check=true --port-forward=false --interactive=false: (1m15.50907664s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7d56486d6c-5hzcr" [0e4ae46a-4ca0-4351-872b-0f76e8a66a6e] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005395231s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-6546455cc-qhqqk" [a6fa8672-38d8-49ac-9a6d-f85d38caca2c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00623103s
helpers_test.go:175: Cleaning up "skaffold-460000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-460000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-460000: (2.505149998s)
--- PASS: TestSkaffold (111.01s)

TestInsufficientStorage (7.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-586000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-586000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (5.70198428s)

-- stdout --
	{"specversion":"1.0","id":"47dbc0fb-b725-49f1-857f-448930d68a1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-586000] minikube v1.34.0 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"94a398db-c945-46c6-998f-a2da7556a69a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19672"}}
	{"specversion":"1.0","id":"7b51a64e-48eb-4245-8490-17263b054d06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig"}}
	{"specversion":"1.0","id":"01954b99-051d-4bf5-923c-5b9916e93f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"31fbcaf8-6cb0-4ea0-88e3-bb65fff82f8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1f9da06-b569-40f3-a74c-2be95f8a055d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube"}}
	{"specversion":"1.0","id":"5d9b813f-569e-447f-9402-f843adbcdc9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"41ddac63-32bc-4706-97ea-7d226a72b985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"18c63803-8877-440e-a384-3cff22f9ff0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"deb56808-21ce-4ea7-83c1-91d0e579cdfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"849b1743-68cc-4165-8e86-ab5033276f8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"699922d6-e072-4112-bf45-788c4f4d9dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-586000\" primary control-plane node in \"insufficient-storage-586000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"111a137f-8a39-458d-9e77-061ed75ed2fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"79246f26-e29a-45ae-9467-d4d1444398e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"51f7c518-b14b-491c-90ba-aadb0cbf16fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
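Each stdout line above is a CloudEvents-style JSON record, which is the shape minikube emits under `--output=json`. As a minimal sketch (not part of the test suite), the step and error messages can be pulled out of such a stream like this, using two records abbreviated from the run above:

```python
import json

# Two abbreviated CloudEvents records in the shape minikube prints
# with --output=json (fields trimmed from the log above).
raw = """\
{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","totalsteps":"19"}}
{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check."}}
"""

def summarize(stream: str):
    """Yield (event_kind, message) pairs for each JSON line."""
    for line in stream.splitlines():
        event = json.loads(line)
        # The short kind is the last dot-separated component of the
        # event type, e.g. "step", "info", or "error".
        kind = event["type"].rsplit(".", 1)[-1]
        yield kind, event["data"]["message"]

for kind, message in summarize(raw):
    print(f"{kind}: {message}")
```

`summarize` is a hypothetical helper for illustration; the actual integration tests match on these events inside Go code.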
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-586000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-586000 --output=json --layout=cluster: exit status 7 (252.089786ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-586000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-586000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 16:26:49.160307   47497 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-586000" does not appear in /Users/jenkins/minikube-integration/19672-40263/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-586000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-586000 --output=json --layout=cluster: exit status 7 (248.110307ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-586000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-586000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 16:26:49.408382   47503 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-586000" does not appear in /Users/jenkins/minikube-integration/19672-40263/kubeconfig
	E0920 16:26:49.418571   47503 status.go:258] unable to read event log: stat: stat /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/insufficient-storage-586000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-586000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-586000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-586000: (1.771545323s)
--- PASS: TestInsufficientStorage (7.97s)

                                                
                                    
TestRunningBinaryUpgrade (93.19s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.927991802 start -p running-upgrade-822000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.927991802 start -p running-upgrade-822000 --memory=2200 --vm-driver=docker : (1m5.836788502s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-822000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-822000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (24.169236585s)
helpers_test.go:175: Cleaning up "running-upgrade-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-822000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-822000: (2.04675434s)
--- PASS: TestRunningBinaryUpgrade (93.19s)

                                                
                                    
TestKubernetesUpgrade (330.1s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker 
E0920 16:31:39.960224   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:50.202267   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker : (28.871489334s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-746000
E0920 16:32:10.685348   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-746000: (10.754896749s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-746000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-746000 status --format={{.Host}}: exit status 7 (80.034294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker : (4m26.718588467s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-746000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker : exit status 106 (596.269955ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-746000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-746000
	    minikube start -p kubernetes-upgrade-746000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7460002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-746000 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker 
E0920 16:36:57.416287   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-746000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker : (20.784226753s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-746000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-746000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-746000: (2.233862871s)
--- PASS: TestKubernetesUpgrade (330.10s)

                                                
                                    
TestMissingContainerUpgrade (84.66s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2032856057 start -p missing-upgrade-295000 --memory=2200 --driver=docker 
E0920 16:30:19.862932   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.2032856057 start -p missing-upgrade-295000 --memory=2200 --driver=docker : (24.777094394s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-295000
E0920 16:30:39.908573   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-295000: (10.175481282s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-295000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-darwin-amd64 start -p missing-upgrade-295000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0920 16:31:29.700872   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:29.708564   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:29.720075   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:29.742243   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:29.784480   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:29.866529   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:30.029188   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:30.350540   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:30.993038   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:31:32.274488   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-darwin-amd64 start -p missing-upgrade-295000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (46.167749269s)
helpers_test.go:175: Cleaning up "missing-upgrade-295000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-295000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-295000: (2.078937985s)
--- PASS: TestMissingContainerUpgrade (84.66s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.8s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19672
- KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4269438590/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4269438590/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4269438590/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4269438590/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (9.80s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.29s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin
- MINIKUBE_LOCATION=19672
- KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1936430063/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1936430063/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1936430063/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1936430063/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (12.29s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (59.72s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1078151178 start -p stopped-upgrade-343000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1078151178 start -p stopped-upgrade-343000 --memory=2200 --vm-driver=docker : (23.71033643s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1078151178 -p stopped-upgrade-343000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1078151178 -p stopped-upgrade-343000 stop: (11.989290323s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-343000 --memory=2200 --alsologtostderr -v=1 --driver=docker 
E0920 16:32:51.647671   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-343000 --memory=2200 --alsologtostderr -v=1 --driver=docker : (24.021030742s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (59.72s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-343000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-343000: (3.115694082s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.12s)

                                                
                                    
TestPause/serial/Start (33.83s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-464000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-464000 --memory=2048 --install-addons=false --wait=all --driver=docker : (33.829980738s)
--- PASS: TestPause/serial/Start (33.83s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (32.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-464000 --alsologtostderr -v=1 --driver=docker 
E0920 16:33:42.979032   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-464000 --alsologtostderr -v=1 --driver=docker : (32.329160789s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (32.34s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-464000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-464000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-464000 --output=json --layout=cluster: exit status 2 (263.14363ms)

                                                
                                                
-- stdout --
	{"Name":"pause-464000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-464000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
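The `--layout=cluster` payload that VerifyStatus checks above is plain JSON, with HTTP-like status codes (418 for Paused, 405 for Stopped, 200 for OK). A minimal sketch of inspecting it the way the test does, under the assumption of an abbreviated payload taken from the run above (`is_paused` is a hypothetical helper, not part of the test suite):

```python
import json

# Abbreviated from the `minikube status --output=json --layout=cluster`
# output above; only the fields inspected here are kept.
payload = json.loads("""
{"Name":"pause-464000","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-464000","StatusCode":200,
           "Components":{"apiserver":{"StatusCode":418,"StatusName":"Paused"},
                         "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}
""")

def is_paused(status: dict) -> bool:
    """Treat the cluster as paused when both the top-level status and the
    apiserver component report 418 (Paused)."""
    apiserver = status["Nodes"][0]["Components"]["apiserver"]
    return status["StatusCode"] == 418 and apiserver["StatusCode"] == 418

print(is_paused(payload))
```

Note that `minikube status` exits non-zero (here, exit status 2) for a paused cluster even though the JSON itself is well-formed, so callers must parse stdout regardless of the exit code.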

                                                
                                    
TestPause/serial/Unpause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-464000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

                                                
                                    
TestPause/serial/PauseAgain (0.57s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-464000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.57s)

                                                
                                    
TestPause/serial/DeletePaused (2.03s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-464000 --alsologtostderr -v=5
E0920 16:34:13.571062   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-464000 --alsologtostderr -v=5: (2.028145011s)
--- PASS: TestPause/serial/DeletePaused (2.03s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.11s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (15.041025449s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-464000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-464000: exit status 1 (21.573122ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-464000: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.11s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (403.380686ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-348000] minikube v1.34.0 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19672
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19672-40263/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19672-40263/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)

TestNoKubernetes/serial/StartWithK8s (18.38s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-348000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-348000 --driver=docker : (18.104851307s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-348000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (18.38s)

TestNoKubernetes/serial/StartWithStopK8s (7.28s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --driver=docker : (5.258856515s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-348000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-348000 status -o json: exit status 2 (250.681496ms)

-- stdout --
	{"Name":"NoKubernetes-348000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-348000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-348000: (1.769402415s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.28s)

TestNoKubernetes/serial/Start (5.68s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-348000 --no-kubernetes --driver=docker : (5.684764626s)
--- PASS: TestNoKubernetes/serial/Start (5.68s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-348000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-348000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (235.230992ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

TestNoKubernetes/serial/ProfileList (0.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.43s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-348000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-348000: (1.425674666s)
--- PASS: TestNoKubernetes/serial/Stop (1.43s)

TestNoKubernetes/serial/StartNoArgs (7.25s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-348000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-348000 --driver=docker : (7.249871878s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-348000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-348000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (228.736047ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/auto/Start (33.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0920 16:35:19.867715   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:35:39.914812   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (33.868625545s)
--- PASS: TestNetworkPlugins/group/auto/Start (33.87s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-538000 "pgrep -a kubelet"
I0920 16:35:47.191592   40830 config.go:182] Loaded profile config "auto-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestNetworkPlugins/group/auto/NetCatPod (11.2s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sxvj2" [efc5c531-3000-4e9e-b90e-72eea7dd136f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sxvj2" [efc5c531-3000-4e9e-b90e-72eea7dd136f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003868179s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.20s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (58.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
E0920 16:36:29.706646   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (58.656789623s)
--- PASS: TestNetworkPlugins/group/calico/Start (58.66s)

TestNetworkPlugins/group/custom-flannel/Start (40.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (40.7586592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (40.76s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-58b88" [0fe58130-afc9-4675-a9d6-0cc50303802a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003754587s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-538000 "pgrep -a kubelet"
I0920 16:37:21.160327   40830 config.go:182] Loaded profile config "calico-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/calico/NetCatPod (12.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-94hgs" [5d3a2cfc-0a7b-4001-a403-cd4979771958] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-94hgs" [5d3a2cfc-0a7b-4001-a403-cd4979771958] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.00325752s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.18s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-538000 "pgrep -a kubelet"
I0920 16:37:49.021993   40830 config.go:182] Loaded profile config "custom-flannel-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t2vdl" [4e25f496-9d45-4380-bd02-d36d2f798129] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-t2vdl" [4e25f496-9d45-4380-bd02-d36d2f798129] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.002827414s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.18s)

TestNetworkPlugins/group/false/Start (30.18s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (30.175478314s)
--- PASS: TestNetworkPlugins/group/false/Start (30.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/Start (53.98s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (53.982535559s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.98s)

TestNetworkPlugins/group/false/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-538000 "pgrep -a kubelet"
E0920 16:38:22.953341   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
I0920 16:38:23.051351   40830 config.go:182] Loaded profile config "false-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.27s)

TestNetworkPlugins/group/false/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-538000 replace --force -f testdata/netcat-deployment.yaml
I0920 16:38:23.289213   40830 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I0920 16:38:23.293134   40830 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gmgln" [d88593eb-9753-4aef-aa23-c9d21249ace4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gmgln" [d88593eb-9753-4aef-aa23-c9d21249ace4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.006251846s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.26s)

TestNetworkPlugins/group/false/DNS (16.41s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-538000 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context false-538000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.145499552s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0920 16:38:48.455234   40830 retry.go:31] will retry after 1.128522666s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context false-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (16.41s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/flannel/Start (31.14s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (31.140037719s)
--- PASS: TestNetworkPlugins/group/flannel/Start (31.14s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ln9t7" [68f3e169-fd23-48ae-af2d-76d6c1c0173d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003225607s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-538000 "pgrep -a kubelet"
I0920 16:39:19.650149   40830 config.go:182] Loaded profile config "kindnet-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-krbd9" [3e2e4acf-46d6-41d9-8b1a-e4584f2cdd41] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-krbd9" [3e2e4acf-46d6-41d9-8b1a-e4584f2cdd41] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.006506315s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.17s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

TestNetworkPlugins/group/flannel/ControllerPod (7.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mbh95" [14db1e9e-9841-47a3-af8e-d7f1cc4945c5] Pending / Ready:ContainersNotReady (containers with unready status: [kube-flannel]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-flannel])
helpers_test.go:344: "kube-flannel-ds-mbh95" [14db1e9e-9841-47a3-af8e-d7f1cc4945c5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 7.005099805s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (7.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-538000 "pgrep -a kubelet"
I0920 16:39:46.124944   40830 config.go:182] Loaded profile config "flannel-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-25dfp" [e49d2e0b-4c20-43ac-87fc-d6557551928d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-25dfp" [e49d2e0b-4c20-43ac-87fc-d6557551928d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00318744s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.17s)

TestNetworkPlugins/group/enable-default-cni/Start (56.64s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (56.635704748s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (56.64s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (62.33s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0920 16:40:19.874109   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:39.921508   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (1m2.330196922s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.33s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-538000 "pgrep -a kubelet"
I0920 16:40:45.542346   40830 config.go:182] Loaded profile config "enable-default-cni-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d445b" [cfb6b338-e997-44be-9aa9-6bc29788b478] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 16:40:47.389611   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.395946   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.407642   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.429132   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.470531   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.553589   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:47.714888   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:48.036894   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:48.678264   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:40:49.960573   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d445b" [cfb6b338-e997-44be-9aa9-6bc29788b478] Running
E0920 16:40:52.523579   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.005931036s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (32.4s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-538000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (32.397501538s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (32.40s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-538000 "pgrep -a kubelet"
I0920 16:41:19.239070   40830 config.go:182] Loaded profile config "bridge-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.17s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nm9nc" [c3e7c8a8-4286-4a94-9c37-f0ff4b161551] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nm9nc" [c3e7c8a8-4286-4a94-9c37-f0ff4b161551] Running
E0920 16:41:28.373138   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003427897s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.17s)

TestNetworkPlugins/group/bridge/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-538000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0920 16:41:29.712176   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-538000 "pgrep -a kubelet"
I0920 16:41:47.303379   40830 config.go:182] Loaded profile config "kubenet-538000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kubenet/NetCatPod (11.18s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-538000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f8ncn" [1ecc9875-a928-4c51-b38f-af56355f3c8f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f8ncn" [1ecc9875-a928-4c51-b38f-af56355f3c8f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004140711s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (145.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-869000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-869000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.20.0: (2m25.315222746s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (145.32s)

TestNetworkPlugins/group/kubenet/DNS (21.22s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-538000 exec deployment/netcat -- nslookup kubernetes.default
E0920 16:42:09.335914   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-538000 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.185722797s)
-- stdout --
	;; connection timed out; no servers could be reached
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
I0920 16:42:13.670378   40830 retry.go:31] will retry after 905.318856ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kubenet-538000 exec deployment/netcat -- nslookup kubernetes.default
E0920 16:42:14.889245   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:14.895669   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:14.907667   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:14.929277   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:14.971215   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:15.052829   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:15.214143   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:15.535806   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:16.177352   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:17.460338   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Done: kubectl --context kubenet-538000 exec deployment/netcat -- nslookup kubernetes.default: (5.131277421s)
--- PASS: TestNetworkPlugins/group/kubenet/DNS (21.22s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-538000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)
E0920 16:54:13.313100   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:54:13.397752   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (48.97s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-469000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.31.1
E0920 16:42:49.197449   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.203982   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.215397   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.237161   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.278358   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.359886   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.521146   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:49.843255   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:50.486351   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:51.768157   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:54.329579   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:55.867784   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:42:59.451734   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:09.694682   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.283790   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.290767   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.302365   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.324271   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.365555   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.448604   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.610399   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:23.932237   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:24.574365   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:25.856061   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-469000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.31.1: (48.973041503s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.97s)

TestStartStop/group/no-preload/serial/DeployApp (9.23s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-469000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cf43df2-6f93-4bb3-a4a5-6055093f04d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 16:43:28.417701   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7cf43df2-6f93-4bb3-a4a5-6055093f04d8] Running
E0920 16:43:30.177097   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:31.259918   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:43:33.539448   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003493746s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-469000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-469000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0920 16:43:36.829994   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-469000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/no-preload/serial/Stop (10.89s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-469000 --alsologtostderr -v=3
E0920 16:43:43.781466   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-469000 --alsologtostderr -v=3: (10.888060296s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.89s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-469000 -n no-preload-469000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-469000 -n no-preload-469000: exit status 7 (76.853258ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-469000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.33s)

TestStartStop/group/no-preload/serial/SecondStart (275.55s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-469000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.31.1
E0920 16:44:04.264011   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:11.139532   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-469000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.31.1: (4m35.252563207s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-469000 -n no-preload-469000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (275.55s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-869000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4a151bc8-1fc3-4a10-bd35-67d04b8c9a07] Pending
E0920 16:44:13.395104   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.401886   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.413876   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.435458   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.476992   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.558771   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:13.720285   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:14.043281   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4a151bc8-1fc3-4a10-bd35-67d04b8c9a07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 16:44:14.685624   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:15.967327   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4a151bc8-1fc3-4a10-bd35-67d04b8c9a07] Running
E0920 16:44:18.528647   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005203141s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-869000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.31s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-869000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-869000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-869000 --alsologtostderr -v=3
E0920 16:44:23.650124   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-869000 --alsologtostderr -v=3: (10.875621552s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.88s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-869000 -n old-k8s-version-869000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-869000 -n old-k8s-version-869000: exit status 7 (76.787252ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-869000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/old-k8s-version/serial/SecondStart (141.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-869000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.20.0
E0920 16:44:33.890802   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:38.853620   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:38.860653   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:38.872388   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:38.894574   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:38.936132   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:39.017855   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:39.179701   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:39.501521   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:40.142832   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:41.423987   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:43.985331   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:45.222100   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:49.106551   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:54.368083   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:58.748374   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:44:59.347687   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:19.829486   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:19.871361   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:33.055361   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:35.329268   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:39.919752   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.696104   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.703501   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.715207   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.736526   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.778546   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:45.860569   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:46.022204   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:46.345754   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:46.987455   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:47.388411   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:48.269827   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:50.831830   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:45:55.954306   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:00.791560   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:06.196501   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:07.141888   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:15.097214   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.395108   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.402601   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.415426   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.436957   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.478954   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.560886   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:19.723113   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:20.044988   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:20.686839   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:21.968147   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:24.531743   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:26.680283   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:29.653871   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:29.710341   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:39.896368   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.470598   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.476849   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.488525   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.510420   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.552527   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.634341   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:47.796601   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:48.118678   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:48.760926   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:50.043653   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:52.605393   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-869000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.20.0: (2m20.796935441s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-869000 -n old-k8s-version-869000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (141.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nj7ph" [3e3d8db7-a1f8-415a-8b48-f76793ed4029] Running
E0920 16:46:57.252898   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:46:57.727553   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:47:00.378808   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005528655s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nj7ph" [3e3d8db7-a1f8-415a-8b48-f76793ed4029] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004796972s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-869000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-869000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-869000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-869000 -n old-k8s-version-869000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-869000 -n old-k8s-version-869000: exit status 2 (268.622616ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-869000 -n old-k8s-version-869000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-869000 -n old-k8s-version-869000: exit status 2 (265.052396ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-869000 --alsologtostderr -v=1
E0920 16:47:07.642561   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-869000 -n old-k8s-version-869000
E0920 16:47:07.969347   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-869000 -n old-k8s-version-869000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.35s)

TestStartStop/group/embed-certs/serial/FirstStart (31.74s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-582000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.31.1
E0920 16:47:14.887255   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:47:22.715015   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:47:28.451973   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:47:41.341119   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-582000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.31.1: (31.738636488s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (31.74s)

TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-582000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [13443c5f-c315-487e-b9a3-277b7163f26a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 16:47:42.592589   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [13443c5f-c315-487e-b9a3-277b7163f26a] Running
E0920 16:47:49.195699   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003251296s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-582000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-582000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-582000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/embed-certs/serial/Stop (10.77s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-582000 --alsologtostderr -v=3
E0920 16:47:52.782972   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-582000 --alsologtostderr -v=3: (10.768739357s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.77s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-582000 -n embed-certs-582000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-582000 -n embed-certs-582000: exit status 7 (77.826743ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-582000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/embed-certs/serial/SecondStart (298.33s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-582000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.31.1
E0920 16:48:09.414516   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:48:16.899726   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:48:23.281931   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-582000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.31.1: (4m58.05879184s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-582000 -n embed-certs-582000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (298.33s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ctcv7" [2575a519-b07d-4fc4-946f-cd5f8b2be3c4] Running
E0920 16:48:29.566993   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005812453s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ctcv7" [2575a519-b07d-4fc4-946f-cd5f8b2be3c4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004978037s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-469000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-469000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (2.37s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-469000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-469000 -n no-preload-469000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-469000 -n no-preload-469000: exit status 2 (263.451325ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-469000 -n no-preload-469000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-469000 -n no-preload-469000: exit status 2 (268.631172ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-469000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-469000 -n no-preload-469000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-469000 -n no-preload-469000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.37s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-229000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.31.1
E0920 16:48:50.987128   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:03.266246   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.307005   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.313878   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.326242   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.347972   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.391513   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.392754   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.474680   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.636230   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:13.958237   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:14.599779   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:15.883333   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:18.445028   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:23.568340   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:31.337872   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:49:33.810330   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-229000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.31.1: (56.371322879s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (56.37s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-229000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d225ddae-0232-4c4e-974f-85b378166b8e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0920 16:49:38.854237   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [d225ddae-0232-4c4e-974f-85b378166b8e] Running
E0920 16:49:41.099309   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kindnet-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004607047s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-229000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-229000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-229000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-229000 --alsologtostderr -v=3
E0920 16:49:54.292549   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-229000 --alsologtostderr -v=3: (10.766518585s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000: exit status 7 (78.044896ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-229000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-229000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.31.1
E0920 16:50:06.559740   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:19.876652   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/functional-490000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:22.992622   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:35.255034   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:39.923521   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/addons-918000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:45.701301   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:50:47.393966   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/auto-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:13.411733   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/enable-default-cni-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:19.401615   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:29.715145   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/skaffold-460000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:47.110708   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/bridge-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:47.475683   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:51:57.178417   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/old-k8s-version-869000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:52:14.893510   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/calico-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:52:15.182232   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/kubenet-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:52:49.199623   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/custom-flannel-538000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-229000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.31.1: (4m23.041786087s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.32s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c87b8" [a7582fbf-d3bd-4552-9fa0-3f51679c8fda] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003705228s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c87b8" [a7582fbf-d3bd-4552-9fa0-3f51679c8fda] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005631238s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-582000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-582000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.46s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-582000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-582000 -n embed-certs-582000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-582000 -n embed-certs-582000: exit status 2 (264.425888ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-582000 -n embed-certs-582000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-582000 -n embed-certs-582000: exit status 2 (269.204317ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-582000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-582000 -n embed-certs-582000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-582000 -n embed-certs-582000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.46s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-381000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.31.1
E0920 16:53:23.287247   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/false-538000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.087440   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.093666   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.106338   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.128986   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.171149   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.253162   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.415482   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:27.738743   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:28.380231   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:29.661978   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:32.223680   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
E0920 16:53:37.345225   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-381000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.31.1: (22.827666414s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (22.83s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-381000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-381000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025838702s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-381000 --alsologtostderr -v=3
E0920 16:53:47.588819   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-381000 --alsologtostderr -v=3: (9.602205339s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.60s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-381000 -n newest-cni-381000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-381000 -n newest-cni-381000: exit status 7 (77.13238ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-381000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-381000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-381000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.31.1: (14.822155936s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-381000 -n newest-cni-381000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.09s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-381000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-381000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-381000 -n newest-cni-381000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-381000 -n newest-cni-381000: exit status 2 (266.147653ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-381000 -n newest-cni-381000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-381000 -n newest-cni-381000: exit status 2 (266.948911ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-381000 --alsologtostderr -v=1
E0920 16:54:08.071927   40830 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19672-40263/.minikube/profiles/no-preload-469000/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-381000 -n newest-cni-381000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-381000 -n newest-cni-381000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.55s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-v9xpr" [36011607-0bf3-4bf6-bedd-1622efb9011f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005254752s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-v9xpr" [36011607-0bf3-4bf6-bedd-1622efb9011f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00465528s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-229000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-229000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-229000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000: exit status 2 (266.985971ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000: exit status 2 (265.782919ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-229000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-229000 -n default-k8s-diff-port-229000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.34s)

Test skip (18/342)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-918000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-918000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-918000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5a6c509d-2ac0-4b69-8805-1b06810305a0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5a6c509d-2ac0-4b69-8805-1b06810305a0] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003902666s
I0920 15:56:14.825049   40830 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p addons-918000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:280: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.68s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-490000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-490000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-7gcfp" [f19fa4dc-b836-4c4e-9e4b-f62b9f1145b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-7gcfp" [f19fa4dc-b836-4c4e-9e4b-f62b9f1145b2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005201213s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-538000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-538000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-538000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/hosts:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/resolv.conf:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-538000

>>> host: crictl pods:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: crictl containers:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> k8s: describe netcat deployment:
error: context "cilium-538000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-538000" does not exist

>>> k8s: netcat logs:
error: context "cilium-538000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-538000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-538000" does not exist

>>> k8s: coredns logs:
error: context "cilium-538000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-538000" does not exist

>>> k8s: api server logs:
error: context "cilium-538000" does not exist

>>> host: /etc/cni:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: ip a s:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: ip r s:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: iptables-save:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: iptables table nat:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-538000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-538000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-538000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-538000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-538000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-538000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-538000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-538000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-538000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-538000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-538000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: kubelet daemon config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> k8s: kubelet logs:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-538000

>>> host: docker daemon status:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: docker daemon config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: docker system info:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: cri-docker daemon status:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: cri-docker daemon config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: cri-dockerd version:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: containerd daemon status:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: containerd daemon config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: containerd config dump:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: crio daemon status:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: crio daemon config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: /etc/crio:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

>>> host: crio config:
* Profile "cilium-538000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-538000"

----------------------- debugLogs end: cilium-538000 [took: 6.223015181s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-538000
--- SKIP: TestNetworkPlugins/group/cilium (6.44s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-788000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-788000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)