Test Report: Docker_macOS 19529

d7f9f66bdcb95e27f1005d5ce9d414c92a72aaf8:2024-08-28:35983

Failed tests (2/175)

| Order | Failed test                               | Duration (s) |
|-------|-------------------------------------------|--------------|
| 33    | TestAddons/parallel/Registry              | 74.44        |
| 223   | TestMountStart/serial/StartWithMountFirst | 7201.807     |
TestAddons/parallel/Registry (74.44s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.500499ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-hqg5k" [9a0b8457-455b-4784-9261-d2e66876448f] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004684567s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mbq7t" [03026f32-0ed6-4356-af59-209ab357edb5] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007525995s
addons_test.go:342: (dbg) Run:  kubectl --context addons-376000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-376000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-376000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.06690647s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-376000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: Unable to complete rest of the test due to connectivity assumptions
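The failing step above is a plain HTTP reachability probe: a one-off busybox pod runs `wget --spider` against the in-cluster registry Service and times out after one minute. For readers reproducing this outside the cluster, the same kind of probe can be sketched in Python; the function name and default timeout below are illustrative choices, not part of the test suite:

```python
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> bool:
    """Rough equivalent of `wget --spider`: request headers only
    (HEAD) and report success on any 2xx status, failure on
    connection errors or timeouts."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, TimeoutError, OSError):
        return False
```

In the failed run this probe would have returned False for `http://registry.kube-system.svc.cluster.local`, matching the `timed out waiting for the condition` error in stderr.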
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-376000
helpers_test.go:235: (dbg) docker inspect addons-376000:

-- stdout --
	[
	    {
	        "Id": "9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf",
	        "Created": "2024-08-28T16:51:32.659451249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1182,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-28T16:51:32.905828949Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:33319d96a2f78fe466b6d8cbd88671515fca2b1eded3ce0b5f6d545b670a78ac",
	        "ResolvConfPath": "/var/lib/docker/containers/9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf/hostname",
	        "HostsPath": "/var/lib/docker/containers/9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf/hosts",
	        "LogPath": "/var/lib/docker/containers/9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf/9640ab5fc76e3bcc3df11d3240f275e0e061dd8a812a1a6f1195a066317797bf-json.log",
	        "Name": "/addons-376000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-376000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-376000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4194304000,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7a9c11fe42f4e4b0c4fec5eaa5ddb1e78921fadac89b49e5b77c5851a9b506ec-init/diff:/var/lib/docker/overlay2/988c931c0486723613a47d9c36f7522d6b8a9b4a5854c76de3506dcd4ab5f7d3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a9c11fe42f4e4b0c4fec5eaa5ddb1e78921fadac89b49e5b77c5851a9b506ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a9c11fe42f4e4b0c4fec5eaa5ddb1e78921fadac89b49e5b77c5851a9b506ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a9c11fe42f4e4b0c4fec5eaa5ddb1e78921fadac89b49e5b77c5851a9b506ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-376000",
	                "Source": "/var/lib/docker/volumes/addons-376000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-376000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-376000",
	                "name.minikube.sigs.k8s.io": "addons-376000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0affd371740655e7cd98745969921ccfccbb10445538c88f20013941fb81da3e",
	            "SandboxKey": "/var/run/docker/netns/0affd3717406",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "49352"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-376000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "226895cf688eb4d43531357a33915869ee395057afbe8d687bb6c942ba1a9268",
	                    "EndpointID": "3bf4839da196adf04d5e1d0cee4b276c1eb868cb52495feea078aff13f2d8aa3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-376000",
	                        "9640ab5fc76e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
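A detail worth pulling out of the inspect output above: `NetworkSettings.Ports` shows the registry's container port `5000/tcp` published on `127.0.0.1:49351`. A minimal sketch of extracting those mappings from a captured `docker inspect` JSON (the embedded excerpt is trimmed from the output above; the helper name is ours):

```python
import json

# Trimmed excerpt of the `docker inspect addons-376000` output above.
inspect_json = """
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "49353"}],
    "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49351"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "49352"}]
}}}]
"""

def port_map(inspect_output: str) -> dict[str, str]:
    """Map each published container port (e.g. '5000/tcp')
    to its host-side 'HostIp:HostPort' binding."""
    container = json.loads(inspect_output)[0]
    ports = container["NetworkSettings"]["Ports"] or {}
    return {
        cport: f"{bindings[0]['HostIp']}:{bindings[0]['HostPort']}"
        for cport, bindings in ports.items()
        if bindings  # unpublished ports map to None
    }
```

Against the full inspect output this yields, among others, `"5000/tcp": "127.0.0.1:49351"`, i.e. the registry endpoint as seen from the macOS host.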
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p addons-376000 -n addons-376000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 logs -n 25: (2.542061469s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-488000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-488000              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-488000              | download-only-488000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | -o=json --download-only              | download-only-032000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-032000              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-032000              | download-only-032000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-488000              | download-only-488000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-032000              | download-only-032000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | --download-only -p                   | download-docker-183000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | download-docker-183000               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	| delete  | -p download-docker-183000            | download-docker-183000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | --download-only -p                   | binary-mirror-590000   | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | binary-mirror-590000                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:49339               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-590000              | binary-mirror-590000   | jenkins | v1.33.1 | 28 Aug 24 09:51 PDT | 28 Aug 24 09:51 PDT |
	| addons  | disable dashboard -p                 | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 09:51 PDT |                     |
	|         | addons-376000                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 09:51 PDT |                     |
	|         | addons-376000                        |                        |         |         |                     |                     |
	| start   | -p addons-376000 --wait=true         | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 09:51 PDT | 28 Aug 24 09:54 PDT |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker  --addons=ingress    |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-376000 addons disable         | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 09:55 PDT | 28 Aug 24 09:55 PDT |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-376000 addons                 | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 10:04 PDT | 28 Aug 24 10:04 PDT |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-376000 addons                 | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 10:04 PDT | 28 Aug 24 10:04 PDT |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-376000 addons disable         | addons-376000          | jenkins | v1.33.1 | 28 Aug 24 10:04 PDT | 28 Aug 24 10:04 PDT |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:51:00
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:51:00.847400    2196 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:51:00.848203    2196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:51:00.848211    2196 out.go:358] Setting ErrFile to fd 2...
	I0828 09:51:00.848217    2196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:51:00.848831    2196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 09:51:00.850596    2196 out.go:352] Setting JSON to false
	I0828 09:51:00.874319    2196 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1230,"bootTime":1724862630,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0828 09:51:00.874417    2196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:51:00.895236    2196 out.go:177] * [addons-376000] minikube v1.33.1 on Darwin 14.6.1
	I0828 09:51:00.936477    2196 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 09:51:00.936514    2196 notify.go:220] Checking for updates...
	I0828 09:51:00.978426    2196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 09:51:01.001556    2196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0828 09:51:01.022321    2196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:51:01.043475    2196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	I0828 09:51:01.064652    2196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 09:51:01.085662    2196 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:51:01.109385    2196 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0828 09:51:01.109581    2196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:51:01.191545    2196 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:51:01.183179932 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:ht
tps://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0
-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-de
sktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plug
ins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:51:01.212539    2196 out.go:177] * Using the docker driver based on user configuration
	I0828 09:51:01.270366    2196 start.go:297] selected driver: docker
	I0828 09:51:01.270391    2196 start.go:901] validating driver "docker" against <nil>
	I0828 09:51:01.270406    2196 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 09:51:01.274835    2196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:51:01.360303    2196 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:51:01.350099433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:51:01.360505    2196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:51:01.360707    2196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 09:51:01.383877    2196 out.go:177] * Using Docker Desktop driver with root privileges
	I0828 09:51:01.405480    2196 cni.go:84] Creating CNI manager for ""
	I0828 09:51:01.405513    2196 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:51:01.405524    2196 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 09:51:01.405612    2196 start.go:340] cluster config:
	{Name:addons-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:51:01.427723    2196 out.go:177] * Starting "addons-376000" primary control-plane node in "addons-376000" cluster
	I0828 09:51:01.471338    2196 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 09:51:01.492421    2196 out.go:177] * Pulling base image v0.0.44-1724775115-19521 ...
	I0828 09:51:01.534308    2196 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:51:01.534339    2196 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 09:51:01.534381    2196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0828 09:51:01.534402    2196 cache.go:56] Caching tarball of preloaded images
	I0828 09:51:01.534624    2196 preload.go:172] Found /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0828 09:51:01.534642    2196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 09:51:01.536173    2196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/config.json ...
	I0828 09:51:01.536301    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/config.json: {Name:mkfb17c5b868a99c39949ed855f78aff344ab792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:01.553342    2196 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 09:51:01.553515    2196 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 09:51:01.553533    2196 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 09:51:01.553539    2196 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 09:51:01.553546    2196 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 09:51:01.553550    2196 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from local cache
	I0828 09:51:24.258466    2196 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce from cached tarball
	I0828 09:51:24.258513    2196 cache.go:194] Successfully downloaded all kic artifacts
	I0828 09:51:24.258574    2196 start.go:360] acquireMachinesLock for addons-376000: {Name:mk41e61c4514500584df30d89852f19998ca6220 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0828 09:51:24.258991    2196 start.go:364] duration metric: took 403.948µs to acquireMachinesLock for "addons-376000"
	I0828 09:51:24.259024    2196 start.go:93] Provisioning new machine with config: &{Name:addons-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 09:51:24.259075    2196 start.go:125] createHost starting for "" (driver="docker")
	I0828 09:51:24.302021    2196 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0828 09:51:24.302289    2196 start.go:159] libmachine.API.Create for "addons-376000" (driver="docker")
	I0828 09:51:24.302315    2196 client.go:168] LocalClient.Create starting
	I0828 09:51:24.302966    2196 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem
	I0828 09:51:24.352043    2196 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/cert.pem
	I0828 09:51:24.538759    2196 cli_runner.go:164] Run: docker network inspect addons-376000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0828 09:51:24.557829    2196 cli_runner.go:211] docker network inspect addons-376000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0828 09:51:24.557936    2196 network_create.go:284] running [docker network inspect addons-376000] to gather additional debugging logs...
	I0828 09:51:24.557954    2196 cli_runner.go:164] Run: docker network inspect addons-376000
	W0828 09:51:24.575661    2196 cli_runner.go:211] docker network inspect addons-376000 returned with exit code 1
	I0828 09:51:24.575690    2196 network_create.go:287] error running [docker network inspect addons-376000]: docker network inspect addons-376000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-376000 not found
	I0828 09:51:24.575701    2196 network_create.go:289] output of [docker network inspect addons-376000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-376000 not found
	
	** /stderr **
	I0828 09:51:24.575843    2196 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0828 09:51:24.594678    2196 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015531e0}
	I0828 09:51:24.594716    2196 network_create.go:124] attempt to create docker network addons-376000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0828 09:51:24.594785    2196 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-376000 addons-376000
	I0828 09:51:24.661388    2196 network_create.go:108] docker network addons-376000 192.168.49.0/24 created
	I0828 09:51:24.661441    2196 kic.go:121] calculated static IP "192.168.49.2" for the "addons-376000" container
	I0828 09:51:24.661615    2196 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0828 09:51:24.680709    2196 cli_runner.go:164] Run: docker volume create addons-376000 --label name.minikube.sigs.k8s.io=addons-376000 --label created_by.minikube.sigs.k8s.io=true
	I0828 09:51:24.700722    2196 oci.go:103] Successfully created a docker volume addons-376000
	I0828 09:51:24.700857    2196 cli_runner.go:164] Run: docker run --rm --name addons-376000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376000 --entrypoint /usr/bin/test -v addons-376000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib
	I0828 09:51:26.877544    2196 cli_runner.go:217] Completed: docker run --rm --name addons-376000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376000 --entrypoint /usr/bin/test -v addons-376000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -d /var/lib: (2.176654979s)
	I0828 09:51:26.877595    2196 oci.go:107] Successfully prepared a docker volume addons-376000
	I0828 09:51:26.877630    2196 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:51:26.877649    2196 kic.go:194] Starting extracting preloaded images to volume ...
	I0828 09:51:26.877776    2196 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-376000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir
	I0828 09:51:32.532611    2196 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-376000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce -I lz4 -xf /preloaded.tar -C /extractDir: (5.654955596s)
	I0828 09:51:32.532637    2196 kic.go:203] duration metric: took 5.655159858s to extract preloaded images to volume ...
	I0828 09:51:32.532750    2196 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0828 09:51:32.638608    2196 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-376000 --name addons-376000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-376000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-376000 --network addons-376000 --ip 192.168.49.2 --volume addons-376000:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce
	I0828 09:51:33.126689    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Running}}
	I0828 09:51:33.153128    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:33.179698    2196 cli_runner.go:164] Run: docker exec addons-376000 stat /var/lib/dpkg/alternatives/iptables
	I0828 09:51:33.267463    2196 oci.go:144] the created container "addons-376000" has a running status.
	I0828 09:51:33.267496    2196 kic.go:225] Creating ssh key for kic: /Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa...
	I0828 09:51:33.762480    2196 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0828 09:51:33.801564    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:33.828309    2196 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0828 09:51:33.828343    2196 kic_runner.go:114] Args: [docker exec --privileged addons-376000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0828 09:51:33.907188    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:33.931281    2196 machine.go:93] provisionDockerMachine start ...
	I0828 09:51:33.931583    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:33.959003    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:33.959294    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:33.959305    2196 main.go:141] libmachine: About to run SSH command:
	hostname
	I0828 09:51:34.100442    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-376000
	
	I0828 09:51:34.100464    2196 ubuntu.go:169] provisioning hostname "addons-376000"
	I0828 09:51:34.100537    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:34.119216    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:34.119406    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:34.119421    2196 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-376000 && echo "addons-376000" | sudo tee /etc/hostname
	I0828 09:51:34.258521    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-376000
	
	I0828 09:51:34.258646    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:34.280151    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:34.280366    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:34.280399    2196 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-376000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-376000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-376000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0828 09:51:34.414097    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0828 09:51:34.414135    2196 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/19529-1451/.minikube CaCertPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19529-1451/.minikube}
	I0828 09:51:34.414159    2196 ubuntu.go:177] setting up certificates
	I0828 09:51:34.414166    2196 provision.go:84] configureAuth start
	I0828 09:51:34.414352    2196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376000
	I0828 09:51:34.433971    2196 provision.go:143] copyHostCerts
	I0828 09:51:34.434077    2196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.pem (1078 bytes)
	I0828 09:51:34.434358    2196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19529-1451/.minikube/cert.pem (1123 bytes)
	I0828 09:51:34.434562    2196 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19529-1451/.minikube/key.pem (1675 bytes)
	I0828 09:51:34.434713    2196 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca-key.pem org=jenkins.addons-376000 san=[127.0.0.1 192.168.49.2 addons-376000 localhost minikube]
	I0828 09:51:34.688800    2196 provision.go:177] copyRemoteCerts
	I0828 09:51:34.688941    2196 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0828 09:51:34.689005    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:34.708546    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:34.800632    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0828 09:51:34.822333    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0828 09:51:34.845844    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0828 09:51:34.867152    2196 provision.go:87] duration metric: took 452.984778ms to configureAuth
	I0828 09:51:34.867195    2196 ubuntu.go:193] setting minikube options for container-runtime
	I0828 09:51:34.867348    2196 config.go:182] Loaded profile config "addons-376000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:34.867422    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:34.886574    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:34.886757    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:34.886774    2196 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0828 09:51:35.016922    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0828 09:51:35.016957    2196 ubuntu.go:71] root file system type: overlay
	I0828 09:51:35.017100    2196 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0828 09:51:35.017186    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:35.038064    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:35.038259    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:35.038313    2196 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0828 09:51:35.178185    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
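The unit file above clears the inherited `ExecStart=` before setting its own, for the reason given in its comments: systemd refuses to start a non-oneshot service with more than one effective `ExecStart=`. A minimal sketch of that rule (temp file and dockerd path are stand-ins, not the real `/lib/systemd` unit):

```shell
# Count effective ExecStart= entries in a unit file. A bare "ExecStart="
# resets the list, so only non-empty entries count toward systemd's
# one-ExecStart limit for Type=notify services.
unit=$(mktemp)
cat > "$unit" <<'EOF'
[Service]
Type=notify
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
count=$(grep -c '^ExecStart=..*' "$unit")
echo "effective ExecStart entries: $count"
```

With the leading reset line, the drop-in overrides rather than appends, which is exactly what the provisioner relies on here.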
	I0828 09:51:35.178273    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:35.197334    2196 main.go:141] libmachine: Using SSH client type: native
	I0828 09:51:35.197567    2196 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xbd41ea0] 0xbd44c00 <nil>  [] 0s} 127.0.0.1 49353 <nil> <nil>}
	I0828 09:51:35.197582    2196 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0828 09:51:36.185449    2196 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-12 11:48:57.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-08-28 16:51:35.176823963 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
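The SSH command above uses a compare-then-install idiom: `diff -u old new || { mv ...; systemctl ...; }` only replaces the unit and restarts Docker when the generated file actually differs, because `diff` exits non-zero on any difference. A minimal sketch under that assumption (temp files stand in for the real `/lib/systemd/system/docker.service` paths, and the restart step is omitted):

```shell
# "Install only if changed": diff exits 0 when files match (no-op) and
# non-zero when they differ, which triggers the || branch that installs
# the new file. On a real host this branch would also daemon-reload
# and restart the service.
current=$(mktemp)
new=$(mktemp)
echo "old config" > "$current"
echo "new config" > "$new"
diff -u "$current" "$new" || cp "$new" "$current"
cat "$current"
```

This keeps re-provisioning idempotent: unchanged hosts skip the disruptive service restart.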
	I0828 09:51:36.185474    2196 machine.go:96] duration metric: took 2.254224275s to provisionDockerMachine
	I0828 09:51:36.185504    2196 client.go:171] duration metric: took 11.883537948s to LocalClient.Create
	I0828 09:51:36.185526    2196 start.go:167] duration metric: took 11.883598031s to libmachine.API.Create "addons-376000"
	I0828 09:51:36.185536    2196 start.go:293] postStartSetup for "addons-376000" (driver="docker")
	I0828 09:51:36.185544    2196 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0828 09:51:36.185632    2196 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0828 09:51:36.185691    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:36.206108    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:36.298868    2196 ssh_runner.go:195] Run: cat /etc/os-release
	I0828 09:51:36.303389    2196 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0828 09:51:36.303413    2196 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0828 09:51:36.303421    2196 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0828 09:51:36.303431    2196 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0828 09:51:36.303442    2196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1451/.minikube/addons for local assets ...
	I0828 09:51:36.303561    2196 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19529-1451/.minikube/files for local assets ...
	I0828 09:51:36.303644    2196 start.go:296] duration metric: took 118.105963ms for postStartSetup
	I0828 09:51:36.304151    2196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376000
	I0828 09:51:36.323062    2196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/config.json ...
	I0828 09:51:36.323966    2196 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 09:51:36.324024    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:36.342477    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:36.431159    2196 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0828 09:51:36.436481    2196 start.go:128] duration metric: took 12.177757958s to createHost
	I0828 09:51:36.436498    2196 start.go:83] releasing machines lock for "addons-376000", held for 12.177866841s
	I0828 09:51:36.436572    2196 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-376000
	I0828 09:51:36.455116    2196 ssh_runner.go:195] Run: cat /version.json
	I0828 09:51:36.455192    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:36.455436    2196 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0828 09:51:36.455965    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:36.476441    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:36.476425    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:36.661288    2196 ssh_runner.go:195] Run: systemctl --version
	I0828 09:51:36.665971    2196 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0828 09:51:36.670873    2196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0828 09:51:36.695710    2196 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0828 09:51:36.695790    2196 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0828 09:51:36.719478    2196 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0828 09:51:36.719494    2196 start.go:495] detecting cgroup driver to use...
	I0828 09:51:36.719513    2196 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 09:51:36.719622    2196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 09:51:36.736141    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0828 09:51:36.746264    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0828 09:51:36.756062    2196 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0828 09:51:36.756134    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0828 09:51:36.766020    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 09:51:36.775853    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0828 09:51:36.786765    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0828 09:51:36.797797    2196 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0828 09:51:36.808219    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0828 09:51:36.817807    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0828 09:51:36.827966    2196 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
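The run of `sed -i -r` commands above patches containerd's TOML in place, preserving indentation via a captured group so the edit works at any nesting depth. A minimal reproduction of the `SystemdCgroup` edit (temp file instead of `/etc/containerd/config.toml`; `sed -i` as used here is the GNU form, which is what runs inside the Linux guest):

```shell
# Force SystemdCgroup = false while keeping the original leading
# whitespace: \1 re-emits whatever indentation the line had.
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup = false' "$cfg"
```

The same pattern (anchor on the key, capture the indent, rewrite the value) is what makes each of these edits safe to re-run.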
	I0828 09:51:36.839261    2196 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0828 09:51:36.847741    2196 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0828 09:51:36.856461    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:36.916694    2196 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0828 09:51:37.016889    2196 start.go:495] detecting cgroup driver to use...
	I0828 09:51:37.016922    2196 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0828 09:51:37.017007    2196 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0828 09:51:37.037525    2196 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0828 09:51:37.037583    2196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0828 09:51:37.052292    2196 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0828 09:51:37.069870    2196 ssh_runner.go:195] Run: which cri-dockerd
	I0828 09:51:37.075352    2196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0828 09:51:37.086327    2196 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0828 09:51:37.112037    2196 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0828 09:51:37.175254    2196 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0828 09:51:37.234773    2196 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0828 09:51:37.234895    2196 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0828 09:51:37.253034    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:37.314310    2196 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0828 09:51:37.824134    2196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0828 09:51:37.836399    2196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 09:51:37.847577    2196 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0828 09:51:37.911219    2196 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0828 09:51:37.969456    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:38.026820    2196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0828 09:51:38.055296    2196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0828 09:51:38.066394    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:38.128565    2196 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0828 09:51:38.207918    2196 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0828 09:51:38.208622    2196 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0828 09:51:38.213451    2196 start.go:563] Will wait 60s for crictl version
	I0828 09:51:38.213536    2196 ssh_runner.go:195] Run: which crictl
	I0828 09:51:38.220217    2196 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0828 09:51:38.257320    2196 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0828 09:51:38.257392    2196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 09:51:38.279592    2196 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0828 09:51:38.354979    2196 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0828 09:51:38.355162    2196 cli_runner.go:164] Run: docker exec -t addons-376000 dig +short host.docker.internal
	I0828 09:51:38.433639    2196 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0828 09:51:38.433896    2196 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0828 09:51:38.438783    2196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
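The `/etc/hosts` update above follows a drop-then-append pattern: filter out any stale entry for the hostname, append the fresh mapping, and copy the result back over the original in one step. A sketch of the same idiom (temp file standing in for `/etc/hosts`, and a plain regex instead of the log's tab-anchored `$'\t...'` bash quoting):

```shell
# Replace any existing host.minikube.internal entry with a fresh one.
# grep -v drops the stale line; the braced group writes the filtered
# file plus the new entry, which then replaces the original via cp.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v 'host\.minikube\.internal$' "$hosts"
  printf '192.168.65.254\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Writing to a temp file first avoids truncating the hosts file while it is still being read by the filter.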
	I0828 09:51:38.449834    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:38.468466    2196 kubeadm.go:883] updating cluster {Name:addons-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0828 09:51:38.468568    2196 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:51:38.468637    2196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 09:51:38.488077    2196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 09:51:38.488095    2196 docker.go:615] Images already preloaded, skipping extraction
	I0828 09:51:38.488206    2196 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0828 09:51:38.508444    2196 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0828 09:51:38.508470    2196 cache_images.go:84] Images are preloaded, skipping loading
	I0828 09:51:38.508484    2196 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
	I0828 09:51:38.508578    2196 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-376000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0828 09:51:38.508648    2196 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0828 09:51:38.557087    2196 cni.go:84] Creating CNI manager for ""
	I0828 09:51:38.557109    2196 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:51:38.557122    2196 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0828 09:51:38.557144    2196 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-376000 NodeName:addons-376000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0828 09:51:38.557270    2196 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-376000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0828 09:51:38.557342    2196 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0828 09:51:38.566049    2196 binaries.go:44] Found k8s binaries, skipping transfer
	I0828 09:51:38.566165    2196 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0828 09:51:38.575262    2196 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0828 09:51:38.593041    2196 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0828 09:51:38.609347    2196 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0828 09:51:38.626422    2196 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0828 09:51:38.630873    2196 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0828 09:51:38.642872    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:38.701232    2196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 09:51:38.733109    2196 certs.go:68] Setting up /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000 for IP: 192.168.49.2
	I0828 09:51:38.733122    2196 certs.go:194] generating shared ca certs ...
	I0828 09:51:38.733134    2196 certs.go:226] acquiring lock for ca certs: {Name:mkc4e123026e887f774a76e4686f1b3b6fccd3ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:38.733328    2196 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.key
	I0828 09:51:38.899040    2196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.crt ...
	I0828 09:51:38.899056    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.crt: {Name:mk2151fbd4ac98f96494b10248aaf13dfdcf3470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:38.899411    2196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.key ...
	I0828 09:51:38.899426    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.key: {Name:mkcc228c6760899a02cb68e90c5fbab87dacb44f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:38.899675    2196 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.key
	I0828 09:51:39.194742    2196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.crt ...
	I0828 09:51:39.194764    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.crt: {Name:mk828277b833137901ac71be4187c452bf7e66cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.195100    2196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.key ...
	I0828 09:51:39.195110    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.key: {Name:mka560e7d65a7a24d6821346d0a4c58a282d08d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.195357    2196 certs.go:256] generating profile certs ...
	I0828 09:51:39.195418    2196 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.key
	I0828 09:51:39.195433    2196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt with IP's: []
	I0828 09:51:39.488881    2196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt ...
	I0828 09:51:39.488900    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: {Name:mk1d25e9276223b0a89d1a2eea05ca44885e3e32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.489295    2196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.key ...
	I0828 09:51:39.489305    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.key: {Name:mkd1ca9e816f4763f29e77dc4de2121b469cef46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.489573    2196 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key.0baa7484
	I0828 09:51:39.489597    2196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt.0baa7484 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0828 09:51:39.748065    2196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt.0baa7484 ...
	I0828 09:51:39.748082    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt.0baa7484: {Name:mk7e7b498998604d889807c8d8c09a22161ceb49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.748402    2196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key.0baa7484 ...
	I0828 09:51:39.748412    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key.0baa7484: {Name:mk2a6bc88de8bc03761cc3f3399b872de2eb93d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.748876    2196 certs.go:381] copying /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt.0baa7484 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt
	I0828 09:51:39.749104    2196 certs.go:385] copying /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key.0baa7484 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key
	I0828 09:51:39.749300    2196 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.key
	I0828 09:51:39.749321    2196 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.crt with IP's: []
	I0828 09:51:39.927073    2196 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.crt ...
	I0828 09:51:39.927094    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.crt: {Name:mkd80bf9dee94357d57a3fbe8d1b3b20f93ec130 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.927487    2196 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.key ...
	I0828 09:51:39.927495    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.key: {Name:mk07c2dc795c289fb729c27c6c959112ee718533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:39.927950    2196 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca-key.pem (1675 bytes)
	I0828 09:51:39.927990    2196 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/ca.pem (1078 bytes)
	I0828 09:51:39.928055    2196 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/cert.pem (1123 bytes)
	I0828 09:51:39.928104    2196 certs.go:484] found cert: /Users/jenkins/minikube-integration/19529-1451/.minikube/certs/key.pem (1675 bytes)
	I0828 09:51:39.928654    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0828 09:51:39.951967    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0828 09:51:39.974967    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0828 09:51:39.997896    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0828 09:51:40.020858    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0828 09:51:40.043584    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0828 09:51:40.065427    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0828 09:51:40.088530    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0828 09:51:40.110854    2196 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19529-1451/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0828 09:51:40.133903    2196 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0828 09:51:40.151226    2196 ssh_runner.go:195] Run: openssl version
	I0828 09:51:40.157399    2196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0828 09:51:40.167247    2196 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:40.171343    2196 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 28 16:51 /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:40.171403    2196 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0828 09:51:40.178524    2196 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0828 09:51:40.188653    2196 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0828 09:51:40.192756    2196 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0828 09:51:40.192809    2196 kubeadm.go:392] StartCluster: {Name:addons-376000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-376000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:51:40.192916    2196 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0828 09:51:40.211743    2196 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0828 09:51:40.220207    2196 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0828 09:51:40.229216    2196 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0828 09:51:40.229265    2196 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0828 09:51:40.238508    2196 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0828 09:51:40.238522    2196 kubeadm.go:157] found existing configuration files:
	
	I0828 09:51:40.238579    2196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0828 09:51:40.247705    2196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0828 09:51:40.247775    2196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0828 09:51:40.257616    2196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0828 09:51:40.266049    2196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0828 09:51:40.266111    2196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0828 09:51:40.274990    2196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0828 09:51:40.283860    2196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0828 09:51:40.283923    2196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0828 09:51:40.292902    2196 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0828 09:51:40.301248    2196 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0828 09:51:40.301308    2196 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0828 09:51:40.309872    2196 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0828 09:51:40.351516    2196 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0828 09:51:40.351586    2196 kubeadm.go:310] [preflight] Running pre-flight checks
	I0828 09:51:40.430583    2196 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0828 09:51:40.430679    2196 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0828 09:51:40.430785    2196 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0828 09:51:40.442159    2196 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0828 09:51:40.484994    2196 out.go:235]   - Generating certificates and keys ...
	I0828 09:51:40.485074    2196 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0828 09:51:40.485156    2196 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0828 09:51:40.595033    2196 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0828 09:51:40.670656    2196 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0828 09:51:40.816593    2196 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0828 09:51:41.014199    2196 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0828 09:51:41.168185    2196 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0828 09:51:41.168331    2196 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-376000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 09:51:41.619992    2196 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0828 09:51:41.620107    2196 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-376000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0828 09:51:41.722144    2196 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0828 09:51:41.898915    2196 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0828 09:51:42.054750    2196 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0828 09:51:42.054800    2196 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0828 09:51:42.141607    2196 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0828 09:51:42.215078    2196 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0828 09:51:42.359305    2196 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0828 09:51:42.515445    2196 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0828 09:51:42.642532    2196 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0828 09:51:42.642853    2196 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0828 09:51:42.644927    2196 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0828 09:51:42.686837    2196 out.go:235]   - Booting up control plane ...
	I0828 09:51:42.687025    2196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0828 09:51:42.687130    2196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0828 09:51:42.687239    2196 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0828 09:51:42.687455    2196 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0828 09:51:42.687664    2196 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0828 09:51:42.687757    2196 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0828 09:51:42.741951    2196 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0828 09:51:42.742186    2196 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0828 09:51:43.242632    2196 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.84278ms
	I0828 09:51:43.242712    2196 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0828 09:51:48.245086    2196 kubeadm.go:310] [api-check] The API server is healthy after 5.002319602s
	I0828 09:51:48.253072    2196 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0828 09:51:48.260995    2196 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0828 09:51:48.273514    2196 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0828 09:51:48.273715    2196 kubeadm.go:310] [mark-control-plane] Marking the node addons-376000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0828 09:51:48.279314    2196 kubeadm.go:310] [bootstrap-token] Using token: hazqyx.kr8c4gd8dkt2fhub
	I0828 09:51:48.300727    2196 out.go:235]   - Configuring RBAC rules ...
	I0828 09:51:48.300902    2196 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0828 09:51:48.339989    2196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0828 09:51:48.344476    2196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0828 09:51:48.346680    2196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0828 09:51:48.348679    2196 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0828 09:51:48.350679    2196 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0828 09:51:48.650562    2196 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0828 09:51:49.062820    2196 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0828 09:51:49.649408    2196 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0828 09:51:49.649984    2196 kubeadm.go:310] 
	I0828 09:51:49.650030    2196 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0828 09:51:49.650035    2196 kubeadm.go:310] 
	I0828 09:51:49.650103    2196 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0828 09:51:49.650114    2196 kubeadm.go:310] 
	I0828 09:51:49.650136    2196 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0828 09:51:49.650197    2196 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0828 09:51:49.650243    2196 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0828 09:51:49.650253    2196 kubeadm.go:310] 
	I0828 09:51:49.650297    2196 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0828 09:51:49.650301    2196 kubeadm.go:310] 
	I0828 09:51:49.650342    2196 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0828 09:51:49.650349    2196 kubeadm.go:310] 
	I0828 09:51:49.650391    2196 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0828 09:51:49.650453    2196 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0828 09:51:49.650509    2196 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0828 09:51:49.650514    2196 kubeadm.go:310] 
	I0828 09:51:49.650582    2196 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0828 09:51:49.650642    2196 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0828 09:51:49.650648    2196 kubeadm.go:310] 
	I0828 09:51:49.650714    2196 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token hazqyx.kr8c4gd8dkt2fhub \
	I0828 09:51:49.650800    2196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:15bca1c5000b5f8f5509a8ff62e3fc3eab8dde4fd0f1230f64d06baf7f682e13 \
	I0828 09:51:49.650822    2196 kubeadm.go:310] 	--control-plane 
	I0828 09:51:49.650827    2196 kubeadm.go:310] 
	I0828 09:51:49.650893    2196 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0828 09:51:49.650900    2196 kubeadm.go:310] 
	I0828 09:51:49.651009    2196 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token hazqyx.kr8c4gd8dkt2fhub \
	I0828 09:51:49.651120    2196 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:15bca1c5000b5f8f5509a8ff62e3fc3eab8dde4fd0f1230f64d06baf7f682e13 
	I0828 09:51:49.652307    2196 kubeadm.go:310] W0828 16:51:40.347517    1824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 09:51:49.652587    2196 kubeadm.go:310] W0828 16:51:40.348068    1824 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0828 09:51:49.652844    2196 kubeadm.go:310] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I0828 09:51:49.652972    2196 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0828 09:51:49.652994    2196 cni.go:84] Creating CNI manager for ""
	I0828 09:51:49.653004    2196 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:51:49.695314    2196 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0828 09:51:49.732545    2196 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0828 09:51:49.741855    2196 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0828 09:51:49.757097    2196 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0828 09:51:49.757184    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-376000 minikube.k8s.io/updated_at=2024_08_28T09_51_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216 minikube.k8s.io/name=addons-376000 minikube.k8s.io/primary=true
	I0828 09:51:49.757184    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:49.865779    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:49.875556    2196 ops.go:34] apiserver oom_adj: -16
	I0828 09:51:50.366242    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:50.865790    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:51.366168    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:51.865743    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:52.366955    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:52.865759    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:53.366061    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:53.866120    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:54.365733    2196 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0828 09:51:54.426052    2196 kubeadm.go:1113] duration metric: took 4.669079875s to wait for elevateKubeSystemPrivileges
	I0828 09:51:54.426070    2196 kubeadm.go:394] duration metric: took 14.233701543s to StartCluster
	I0828 09:51:54.426081    2196 settings.go:142] acquiring lock: {Name:mk033992ae8760298f01c60eb8b9afc3224fc58c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:54.426890    2196 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 09:51:54.427134    2196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/kubeconfig: {Name:mkd241e8c5e40745b828e6b02b2821b9e1dfe769 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:51:54.427412    2196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0828 09:51:54.427432    2196 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0828 09:51:54.427448    2196 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0828 09:51:54.427575    2196 addons.go:69] Setting helm-tiller=true in profile "addons-376000"
	I0828 09:51:54.427588    2196 addons.go:69] Setting registry=true in profile "addons-376000"
	I0828 09:51:54.427590    2196 config.go:182] Loaded profile config "addons-376000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:54.427594    2196 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-376000"
	I0828 09:51:54.427599    2196 addons.go:69] Setting default-storageclass=true in profile "addons-376000"
	I0828 09:51:54.427611    2196 addons.go:69] Setting inspektor-gadget=true in profile "addons-376000"
	I0828 09:51:54.427621    2196 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-376000"
	I0828 09:51:54.427622    2196 addons.go:69] Setting metrics-server=true in profile "addons-376000"
	I0828 09:51:54.427644    2196 addons.go:69] Setting volumesnapshots=true in profile "addons-376000"
	I0828 09:51:54.427643    2196 addons.go:69] Setting volcano=true in profile "addons-376000"
	I0828 09:51:54.427647    2196 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-376000"
	I0828 09:51:54.427646    2196 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-376000"
	I0828 09:51:54.427658    2196 addons.go:234] Setting addon metrics-server=true in "addons-376000"
	I0828 09:51:54.427661    2196 addons.go:234] Setting addon volumesnapshots=true in "addons-376000"
	I0828 09:51:54.427667    2196 addons.go:234] Setting addon volcano=true in "addons-376000"
	I0828 09:51:54.427607    2196 addons.go:234] Setting addon helm-tiller=true in "addons-376000"
	I0828 09:51:54.427686    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427628    2196 addons.go:234] Setting addon inspektor-gadget=true in "addons-376000"
	I0828 09:51:54.427693    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427695    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427607    2196 addons.go:234] Setting addon registry=true in "addons-376000"
	I0828 09:51:54.427695    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427738    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427578    2196 addons.go:69] Setting yakd=true in profile "addons-376000"
	I0828 09:51:54.427751    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427591    2196 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-376000"
	I0828 09:51:54.427770    2196 addons.go:234] Setting addon yakd=true in "addons-376000"
	I0828 09:51:54.427614    2196 addons.go:69] Setting storage-provisioner=true in profile "addons-376000"
	I0828 09:51:54.427633    2196 addons.go:69] Setting cloud-spanner=true in profile "addons-376000"
	I0828 09:51:54.427803    2196 addons.go:234] Setting addon storage-provisioner=true in "addons-376000"
	I0828 09:51:54.427631    2196 addons.go:69] Setting ingress=true in profile "addons-376000"
	I0828 09:51:54.427616    2196 addons.go:69] Setting ingress-dns=true in profile "addons-376000"
	I0828 09:51:54.427820    2196 addons.go:234] Setting addon ingress=true in "addons-376000"
	I0828 09:51:54.427833    2196 addons.go:234] Setting addon ingress-dns=true in "addons-376000"
	I0828 09:51:54.427834    2196 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-376000"
	I0828 09:51:54.427845    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427853    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427858    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.428099    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.428169    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.428225    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.428242    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.428243    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.427805    2196 addons.go:234] Setting addon cloud-spanner=true in "addons-376000"
	I0828 09:51:54.428296    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427706    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.428304    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.428313    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.427790    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.428353    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.427847    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.427643    2196 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-376000"
	I0828 09:51:54.427616    2196 addons.go:69] Setting gcp-auth=true in profile "addons-376000"
	I0828 09:51:54.428336    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.429284    2196 mustload.go:65] Loading cluster: addons-376000
	I0828 09:51:54.428243    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.430581    2196 config.go:182] Loaded profile config "addons-376000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 09:51:54.430630    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.430856    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.430877    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.431030    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.431553    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.432310    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.450253    2196 out.go:177] * Verifying Kubernetes components...
	I0828 09:51:54.523199    2196 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0828 09:51:54.530902    2196 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0828 09:51:54.536249    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.536800    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.573361    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0828 09:51:54.541307    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.544740    2196 addons.go:234] Setting addon default-storageclass=true in "addons-376000"
	I0828 09:51:54.596087    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.596650    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.544740    2196 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-376000"
	I0828 09:51:54.615076    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:51:54.615425    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:51:54.616272    2196 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0828 09:51:54.616348    2196 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0828 09:51:54.616673    2196 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0828 09:51:54.616506    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0828 09:51:54.654813    2196 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0828 09:51:54.616481    2196 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0828 09:51:54.616526    2196 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0828 09:51:54.616488    2196 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0828 09:51:54.616607    2196 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0828 09:51:54.595378    2196 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0828 09:51:54.653814    2196 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0828 09:51:54.653896    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0828 09:51:54.654129    2196 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0828 09:51:54.655297    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.691838    2196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0828 09:51:54.692508    2196 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0828 09:51:54.728350    2196 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0828 09:51:54.728873    2196 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0828 09:51:54.728961    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0828 09:51:54.732431    2196 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0828 09:51:54.830350    2196 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0828 09:51:54.772896    2196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 09:51:54.851570    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0828 09:51:54.773372    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.809259    2196 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0828 09:51:54.851651    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0828 09:51:54.810150    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.851688    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.830508    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.851139    2196 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0828 09:51:54.872733    2196 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0828 09:51:54.851206    2196 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 09:51:54.851740    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.909684    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0828 09:51:54.855565    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:54.872121    2196 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0828 09:51:54.909891    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.872885    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.909892    2196 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0828 09:51:54.908959    2196 out.go:201] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                      │
	│    Registry addon with docker driver uses port 49351 please use that instead of default port 5000    │
	│                                                                                                      │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0828 09:51:54.911252    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:54.929942    2196 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0828 09:51:54.929949    2196 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0828 09:51:54.929942    2196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:51:54.950939    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0828 09:51:54.950938    2196 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0828 09:51:55.020227    2196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0828 09:51:55.020247    2196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0828 09:51:55.037331    2196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0828 09:51:55.037356    2196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0828 09:51:55.048244    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.053968    2196 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0828 09:51:55.053982    2196 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0828 09:51:55.066759    2196 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0828 09:51:55.066803    2196 out.go:177] * For more information see: https://minikube.sigs.k8s.io/docs/drivers/docker
	I0828 09:51:55.066902    2196 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 09:51:55.089665    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0828 09:51:55.071219    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0828 09:51:55.089707    2196 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0828 09:51:55.089023    2196 out.go:177]   - Using image docker.io/busybox:stable
	I0828 09:51:55.089780    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.092018    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.106114    2196 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:51:55.110282    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0828 09:51:55.111523    2196 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 09:51:55.111616    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0828 09:51:55.111972    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.130933    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0828 09:51:55.130962    2196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:51:55.134362    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.134458    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:51:55.150182    2196 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0828 09:51:55.152565    2196 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0828 09:51:55.152198    2196 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 09:51:55.152618    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0828 09:51:55.152726    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.155990    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.176242    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.176245    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.177851    2196 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0828 09:51:55.177867    2196 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0828 09:51:55.193961    2196 out.go:177]   - Using image docker.io/registry:2.8.3
	I0828 09:51:55.203584    2196 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0828 09:51:55.203601    2196 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0828 09:51:55.230846    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0828 09:51:55.233481    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.233488    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.251840    2196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0828 09:51:55.274250    2196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0828 09:51:55.274268    2196 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0828 09:51:55.278069    2196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0828 09:51:55.278091    2196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0828 09:51:55.288621    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0828 09:51:55.288669    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0828 09:51:55.295997    2196 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0828 09:51:55.296187    2196 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 09:51:55.296752    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0828 09:51:55.296899    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.299993    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.317024    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0828 09:51:55.319404    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.359445    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.359583    2196 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0828 09:51:55.359610    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0828 09:51:55.359816    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.374855    2196 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 09:51:55.374923    2196 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0828 09:51:55.377211    2196 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0828 09:51:55.377258    2196 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0828 09:51:55.382795    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0828 09:51:55.420547    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.442008    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0828 09:51:55.467246    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.467532    2196 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0828 09:51:55.467548    2196 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0828 09:51:55.471451    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0828 09:51:55.480490    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0828 09:51:55.483927    2196 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0828 09:51:55.520954    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0828 09:51:55.520968    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0828 09:51:55.521052    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.539262    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:51:55.570331    2196 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0828 09:51:55.570360    2196 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0828 09:51:55.580472    2196 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.049560178s)
	I0828 09:51:55.580509    2196 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0828 09:51:55.580643    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" addons-376000
	I0828 09:51:55.601240    2196 node_ready.go:35] waiting up to 6m0s for node "addons-376000" to be "Ready" ...
	I0828 09:51:55.668205    2196 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0828 09:51:55.668333    2196 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0828 09:51:55.680614    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0828 09:51:55.682938    2196 node_ready.go:49] node "addons-376000" has status "Ready":"True"
	I0828 09:51:55.682953    2196 node_ready.go:38] duration metric: took 81.677939ms for node "addons-376000" to be "Ready" ...
	I0828 09:51:55.682960    2196 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 09:51:55.772440    2196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0828 09:51:55.772478    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0828 09:51:55.777070    2196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace to be "Ready" ...
	I0828 09:51:55.873551    2196 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0828 09:51:55.873647    2196 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0828 09:51:55.874257    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0828 09:51:55.878058    2196 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 09:51:55.878097    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0828 09:51:55.975434    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0828 09:51:55.978401    2196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0828 09:51:55.978428    2196 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0828 09:51:56.069864    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0828 09:51:56.076221    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0828 09:51:56.179335    2196 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0828 09:51:56.179355    2196 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0828 09:51:56.180666    2196 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0828 09:51:56.180680    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0828 09:51:56.269843    2196 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-376000" context rescaled to 1 replicas
	I0828 09:51:56.273864    2196 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 09:51:56.273884    2196 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0828 09:51:56.373053    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0828 09:51:56.473938    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0828 09:51:56.473993    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0828 09:51:56.478375    2196 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0828 09:51:56.478432    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0828 09:51:56.567270    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0828 09:51:56.569231    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0828 09:51:56.775388    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0828 09:51:56.882157    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0828 09:51:56.882183    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0828 09:51:57.265125    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0828 09:51:57.265185    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0828 09:51:57.572051    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0828 09:51:57.572069    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0828 09:51:57.878186    2196 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0828 09:51:57.878205    2196 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0828 09:51:57.978873    2196 pod_ready.go:103] pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace has status "Ready":"False"
	I0828 09:51:58.178975    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0828 09:51:58.179058    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0828 09:51:58.676531    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0828 09:51:58.676566    2196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0828 09:51:59.074372    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0828 09:51:59.074390    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0828 09:51:59.477177    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0828 09:51:59.477204    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0828 09:51:59.970033    2196 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 09:51:59.970058    2196 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0828 09:51:59.981538    2196 pod_ready.go:103] pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace has status "Ready":"False"
	I0828 09:52:00.186913    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0828 09:52:02.183310    2196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0828 09:52:02.183407    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:52:02.201886    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:52:02.380043    2196 pod_ready.go:103] pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace has status "Ready":"False"
	I0828 09:52:02.876358    2196 pod_ready.go:93] pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:02.876460    2196 pod_ready.go:82] duration metric: took 7.09957473s for pod "coredns-6f6b679f8f-fn4jj" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:02.876494    2196 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-kllsn" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:02.981847    2196 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0828 09:52:03.269474    2196 addons.go:234] Setting addon gcp-auth=true in "addons-376000"
	I0828 09:52:03.269530    2196 host.go:66] Checking if "addons-376000" exists ...
	I0828 09:52:03.270177    2196 cli_runner.go:164] Run: docker container inspect addons-376000 --format={{.State.Status}}
	I0828 09:52:03.296119    2196 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0828 09:52:03.296209    2196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-376000
	I0828 09:52:03.313880    2196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49353 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/addons-376000/id_rsa Username:docker}
	I0828 09:52:03.469674    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.317395021s)
	I0828 09:52:03.469802    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.18136154s)
	W0828 09:52:03.469802    2196 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 09:52:03.469846    2196 retry.go:31] will retry after 131.343091ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0828 09:52:03.469876    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.181487663s)
	I0828 09:52:03.469927    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (7.998700498s)
	I0828 09:52:03.470043    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.98977181s)
	I0828 09:52:03.470090    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.789676006s)
	I0828 09:52:03.470214    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.596173061s)
	I0828 09:52:03.470288    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.495043188s)
	W0828 09:52:03.574374    2196 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0828 09:52:03.603006    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0828 09:52:04.480050    2196 pod_ready.go:93] pod "coredns-6f6b679f8f-kllsn" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.480070    2196 pod_ready.go:82] duration metric: took 1.603615541s for pod "coredns-6f6b679f8f-kllsn" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.480083    2196 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.571712    2196 pod_ready.go:93] pod "etcd-addons-376000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.571731    2196 pod_ready.go:82] duration metric: took 91.643265ms for pod "etcd-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.571741    2196 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.677384    2196 pod_ready.go:93] pod "kube-apiserver-addons-376000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.677483    2196 pod_ready.go:82] duration metric: took 105.732491ms for pod "kube-apiserver-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.677499    2196 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.765951    2196 pod_ready.go:93] pod "kube-controller-manager-addons-376000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.765979    2196 pod_ready.go:82] duration metric: took 88.467678ms for pod "kube-controller-manager-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.765999    2196 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8bjfx" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.778830    2196 pod_ready.go:93] pod "kube-proxy-8bjfx" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.778850    2196 pod_ready.go:82] duration metric: took 12.841163ms for pod "kube-proxy-8bjfx" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.778865    2196 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.865057    2196 pod_ready.go:93] pod "kube-scheduler-addons-376000" in "kube-system" namespace has status "Ready":"True"
	I0828 09:52:04.865074    2196 pod_ready.go:82] duration metric: took 86.202379ms for pod "kube-scheduler-addons-376000" in "kube-system" namespace to be "Ready" ...
	I0828 09:52:04.865083    2196 pod_ready.go:39] duration metric: took 9.182395526s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0828 09:52:04.865110    2196 api_server.go:52] waiting for apiserver process to appear ...
	I0828 09:52:04.865182    2196 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 09:52:05.466244    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.396634149s)
	I0828 09:52:05.466275    2196 addons.go:475] Verifying addon ingress=true in "addons-376000"
	I0828 09:52:05.491439    2196 out.go:177] * Verifying ingress addon...
	I0828 09:52:05.535953    2196 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0828 09:52:05.567260    2196 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0828 09:52:05.567278    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:06.080422    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:06.576266    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:07.070303    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:07.581144    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:08.072721    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:08.571979    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:08.768594    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.692734821s)
	I0828 09:52:08.768716    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.396002552s)
	I0828 09:52:08.768789    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (12.201863854s)
	I0828 09:52:08.768920    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.200019762s)
	I0828 09:52:08.768967    2196 addons.go:475] Verifying addon metrics-server=true in "addons-376000"
	I0828 09:52:08.769011    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.993961865s)
	I0828 09:52:08.769046    2196 addons.go:475] Verifying addon registry=true in "addons-376000"
	I0828 09:52:08.797793    2196 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-376000 service yakd-dashboard -n yakd-dashboard
	
	I0828 09:52:08.840186    2196 out.go:177] * Verifying registry addon...
	I0828 09:52:08.883795    2196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0828 09:52:08.887709    2196 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0828 09:52:08.887725    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:09.069656    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:09.389214    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:09.568470    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:09.568957    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.382265114s)
	I0828 09:52:09.568988    2196 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-376000"
	I0828 09:52:09.569058    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.966197595s)
	I0828 09:52:09.569086    2196 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.273136027s)
	I0828 09:52:09.569105    2196 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.704057133s)
	I0828 09:52:09.569127    2196 api_server.go:72] duration metric: took 15.142132886s to wait for apiserver process to appear ...
	I0828 09:52:09.569318    2196 api_server.go:88] waiting for apiserver healthz status ...
	I0828 09:52:09.569401    2196 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:49352/healthz ...
	I0828 09:52:09.612999    2196 api_server.go:279] https://127.0.0.1:49352/healthz returned 200:
	ok
	I0828 09:52:09.614367    2196 api_server.go:141] control plane version: v1.31.0
	I0828 09:52:09.614379    2196 api_server.go:131] duration metric: took 45.050556ms to wait for apiserver health ...
	I0828 09:52:09.614384    2196 system_pods.go:43] waiting for kube-system pods to appear ...
	I0828 09:52:09.643100    2196 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0828 09:52:09.664088    2196 out.go:177] * Verifying csi-hostpath-driver addon...
	I0828 09:52:09.739167    2196 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0828 09:52:09.740127    2196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0828 09:52:09.760044    2196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0828 09:52:09.760083    2196 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0828 09:52:09.768982    2196 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0828 09:52:09.769000    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:09.775180    2196 system_pods.go:59] 18 kube-system pods found
	I0828 09:52:09.775241    2196 system_pods.go:61] "coredns-6f6b679f8f-kllsn" [2b6122dc-536d-461c-9408-9f9ac40fdf71] Running
	I0828 09:52:09.775252    2196 system_pods.go:61] "csi-hostpath-attacher-0" [28d2e83a-6711-4800-93bf-438e00b066dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 09:52:09.775256    2196 system_pods.go:61] "csi-hostpath-resizer-0" [adef230f-2a07-4807-a718-c22c96ad343d] Pending
	I0828 09:52:09.775265    2196 system_pods.go:61] "csi-hostpathplugin-4k5dc" [727322e3-6d5c-4dcb-a95e-4d50567b1a85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 09:52:09.775271    2196 system_pods.go:61] "etcd-addons-376000" [0bcf2bee-a9f1-4601-afe9-0c701c5d36ca] Running
	I0828 09:52:09.775278    2196 system_pods.go:61] "kube-apiserver-addons-376000" [240ab2fb-4b6e-4ade-ac61-1e9913686fcf] Running
	I0828 09:52:09.775285    2196 system_pods.go:61] "kube-controller-manager-addons-376000" [ccf72156-a6d3-49cd-8070-7551bba571a0] Running
	I0828 09:52:09.775298    2196 system_pods.go:61] "kube-ingress-dns-minikube" [7306f9c6-5e5b-4d20-8a43-4760379551b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0828 09:52:09.775309    2196 system_pods.go:61] "kube-proxy-8bjfx" [f44d2830-4a83-4d4f-a727-644053173457] Running
	I0828 09:52:09.775315    2196 system_pods.go:61] "kube-scheduler-addons-376000" [70ab7578-d192-49dd-b74e-7af499bb9799] Running
	I0828 09:52:09.775325    2196 system_pods.go:61] "metrics-server-84c5f94fbc-nvqvh" [2fae3987-7824-4b7c-8daf-259b4f25deb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 09:52:09.775335    2196 system_pods.go:61] "nvidia-device-plugin-daemonset-4pp2v" [0547e8e7-d64c-4673-b30d-2c858c7224a3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0828 09:52:09.775345    2196 system_pods.go:61] "registry-6fb4cdfc84-hqg5k" [9a0b8457-455b-4784-9261-d2e66876448f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 09:52:09.775352    2196 system_pods.go:61] "registry-proxy-mbq7t" [03026f32-0ed6-4356-af59-209ab357edb5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 09:52:09.775358    2196 system_pods.go:61] "snapshot-controller-56fcc65765-xt2ns" [cfdfd478-64fa-44b9-8cc5-a761f47d7f31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:52:09.775364    2196 system_pods.go:61] "snapshot-controller-56fcc65765-zjv5r" [96c675c1-2ba5-47b4-bbc4-f4a67107fd97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:52:09.775393    2196 system_pods.go:61] "storage-provisioner" [128495ca-86c4-4960-bdfd-4773e5c67a3c] Running
	I0828 09:52:09.775408    2196 system_pods.go:61] "tiller-deploy-b48cc5f79-sh2dj" [aef15770-b038-42fc-8810-27971a2dac93] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0828 09:52:09.775432    2196 system_pods.go:74] duration metric: took 161.045646ms to wait for pod list to return data ...
	I0828 09:52:09.775443    2196 default_sa.go:34] waiting for default service account to be created ...
	I0828 09:52:09.779543    2196 default_sa.go:45] found service account: "default"
	I0828 09:52:09.779563    2196 default_sa.go:55] duration metric: took 4.110435ms for default service account to be created ...
	I0828 09:52:09.779572    2196 system_pods.go:116] waiting for k8s-apps to be running ...
	I0828 09:52:09.786902    2196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0828 09:52:09.786923    2196 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0828 09:52:09.793767    2196 system_pods.go:86] 18 kube-system pods found
	I0828 09:52:09.793797    2196 system_pods.go:89] "coredns-6f6b679f8f-kllsn" [2b6122dc-536d-461c-9408-9f9ac40fdf71] Running
	I0828 09:52:09.793815    2196 system_pods.go:89] "csi-hostpath-attacher-0" [28d2e83a-6711-4800-93bf-438e00b066dc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0828 09:52:09.793824    2196 system_pods.go:89] "csi-hostpath-resizer-0" [adef230f-2a07-4807-a718-c22c96ad343d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0828 09:52:09.793830    2196 system_pods.go:89] "csi-hostpathplugin-4k5dc" [727322e3-6d5c-4dcb-a95e-4d50567b1a85] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0828 09:52:09.793834    2196 system_pods.go:89] "etcd-addons-376000" [0bcf2bee-a9f1-4601-afe9-0c701c5d36ca] Running
	I0828 09:52:09.793838    2196 system_pods.go:89] "kube-apiserver-addons-376000" [240ab2fb-4b6e-4ade-ac61-1e9913686fcf] Running
	I0828 09:52:09.793842    2196 system_pods.go:89] "kube-controller-manager-addons-376000" [ccf72156-a6d3-49cd-8070-7551bba571a0] Running
	I0828 09:52:09.793846    2196 system_pods.go:89] "kube-ingress-dns-minikube" [7306f9c6-5e5b-4d20-8a43-4760379551b6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0828 09:52:09.793850    2196 system_pods.go:89] "kube-proxy-8bjfx" [f44d2830-4a83-4d4f-a727-644053173457] Running
	I0828 09:52:09.793854    2196 system_pods.go:89] "kube-scheduler-addons-376000" [70ab7578-d192-49dd-b74e-7af499bb9799] Running
	I0828 09:52:09.793861    2196 system_pods.go:89] "metrics-server-84c5f94fbc-nvqvh" [2fae3987-7824-4b7c-8daf-259b4f25deb3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0828 09:52:09.793870    2196 system_pods.go:89] "nvidia-device-plugin-daemonset-4pp2v" [0547e8e7-d64c-4673-b30d-2c858c7224a3] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0828 09:52:09.793877    2196 system_pods.go:89] "registry-6fb4cdfc84-hqg5k" [9a0b8457-455b-4784-9261-d2e66876448f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0828 09:52:09.793886    2196 system_pods.go:89] "registry-proxy-mbq7t" [03026f32-0ed6-4356-af59-209ab357edb5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0828 09:52:09.793897    2196 system_pods.go:89] "snapshot-controller-56fcc65765-xt2ns" [cfdfd478-64fa-44b9-8cc5-a761f47d7f31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:52:09.793906    2196 system_pods.go:89] "snapshot-controller-56fcc65765-zjv5r" [96c675c1-2ba5-47b4-bbc4-f4a67107fd97] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0828 09:52:09.793912    2196 system_pods.go:89] "storage-provisioner" [128495ca-86c4-4960-bdfd-4773e5c67a3c] Running
	I0828 09:52:09.793920    2196 system_pods.go:89] "tiller-deploy-b48cc5f79-sh2dj" [aef15770-b038-42fc-8810-27971a2dac93] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0828 09:52:09.793931    2196 system_pods.go:126] duration metric: took 14.351109ms to wait for k8s-apps to be running ...
	I0828 09:52:09.793945    2196 system_svc.go:44] waiting for kubelet service to be running ....
	I0828 09:52:09.794346    2196 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 09:52:09.872295    2196 system_svc.go:56] duration metric: took 78.347952ms WaitForService to wait for kubelet
	I0828 09:52:09.872335    2196 kubeadm.go:582] duration metric: took 15.445347864s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0828 09:52:09.872365    2196 node_conditions.go:102] verifying NodePressure condition ...
	I0828 09:52:09.874349    2196 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 09:52:09.874374    2196 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0828 09:52:09.876255    2196 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0828 09:52:09.876280    2196 node_conditions.go:123] node cpu capacity is 12
	I0828 09:52:09.876297    2196 node_conditions.go:105] duration metric: took 3.921145ms to run NodePressure ...
	I0828 09:52:09.876306    2196 start.go:241] waiting for startup goroutines ...
	I0828 09:52:09.887997    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:09.903589    2196 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0828 09:52:10.068984    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:10.268671    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:10.388595    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:10.570624    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:10.768636    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:10.890163    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:11.067236    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:11.175967    2196 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.2723863s)
	I0828 09:52:11.176982    2196 addons.go:475] Verifying addon gcp-auth=true in "addons-376000"
	I0828 09:52:11.200838    2196 out.go:177] * Verifying gcp-auth addon...
	I0828 09:52:11.276848    2196 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0828 09:52:11.279202    2196 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 09:52:11.279854    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:11.386887    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:11.539800    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:11.744236    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:11.887806    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:12.040120    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:12.244972    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:12.386960    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:12.540484    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:12.744768    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:12.887800    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:13.067742    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:13.266563    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:13.388223    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:13.541449    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:13.766542    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:13.888043    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:14.041309    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:14.268067    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:14.387175    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:14.540369    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:14.744621    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:14.888273    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:15.039542    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:15.244064    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:15.389018    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:15.659764    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:15.746513    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:15.897176    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:16.039672    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:16.247058    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:16.386601    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:16.539327    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:16.744583    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:16.887616    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:17.039371    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:17.245411    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:17.387568    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:17.539939    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:17.746950    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:17.887755    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:18.047211    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:18.245813    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:18.386440    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:18.538910    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:18.743855    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:18.887615    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:19.040553    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:19.245602    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:19.386645    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:19.539336    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:19.743926    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:19.887576    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:20.039210    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:20.243818    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:20.387264    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:20.541659    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:20.743921    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:20.888137    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:21.066487    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:21.244354    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:21.387941    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:21.539172    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:21.743886    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:21.887839    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:22.040409    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:22.244403    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:22.386367    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:22.540207    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:22.745459    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:22.886716    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:23.039487    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:23.246422    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:23.387922    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:23.539936    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:23.744398    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:23.887263    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:24.040017    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:24.243937    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:24.386576    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:24.540856    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:24.744722    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:24.887745    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:25.039282    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:25.265386    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:25.387778    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:25.538900    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:25.744065    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:25.888207    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:26.038930    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:26.243997    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:26.387360    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:26.540054    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:26.743756    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:26.887313    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:27.039882    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:27.243320    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:27.387232    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:27.538850    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:27.744292    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:27.887570    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:28.039621    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:28.266999    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:28.387066    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:28.539458    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:28.743776    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:28.887068    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:29.038743    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:29.243772    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:29.386945    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:29.539448    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:29.743191    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:29.887010    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:30.039565    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:30.243537    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:30.387358    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:30.538719    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:30.743665    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:30.886525    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:31.038940    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:31.243147    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:31.386397    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:31.539058    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:31.743450    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:31.888499    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:32.038728    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:32.243232    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:32.387044    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:32.539077    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:32.743547    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:32.886727    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:33.156967    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:33.257962    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:33.387694    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:33.538962    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:33.743432    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:33.889191    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:34.065491    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:34.243098    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:34.386331    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:34.539536    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:34.743561    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:34.886836    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:35.064360    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:35.243343    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:35.386678    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:35.539639    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:35.743813    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:35.889234    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:36.041479    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:36.245478    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:36.386209    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:36.538382    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:36.745597    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:36.886966    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:37.039400    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:37.244072    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:37.387947    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:37.538879    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:37.742969    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:37.886703    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:38.038878    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:38.244060    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:38.386400    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:38.538986    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:38.744680    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:38.886846    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:39.038763    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:39.243608    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:39.386797    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:39.538843    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:39.743660    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:39.886686    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:40.038562    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:40.243208    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:40.387227    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:40.538996    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:40.744476    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:40.886186    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:41.038669    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:41.242789    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:41.386508    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:41.539285    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:41.743120    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:41.886698    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:42.039954    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:42.243524    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:42.387212    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:42.538516    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:42.743753    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:42.886708    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:43.039241    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:43.266367    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:43.388018    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:43.564704    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:43.763513    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:43.887282    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:44.039512    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:44.243147    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:44.387045    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:44.538725    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:44.765132    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:44.886387    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:45.038890    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:45.265276    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:45.386554    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:45.647478    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:45.748820    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:45.886263    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:46.038785    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:46.244256    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:46.385718    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:46.538760    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:46.742594    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:46.886117    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:47.039063    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:47.244249    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:47.386355    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:47.538028    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:47.742945    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:47.886369    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:48.038949    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:48.243480    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:48.385617    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:48.539339    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:48.744125    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:48.887343    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:49.064326    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:49.243994    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:49.385844    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:49.539255    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:49.743053    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:49.886743    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:50.038637    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:50.243133    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:50.386909    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:50.539017    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:50.743928    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:50.887168    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:51.038956    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:51.243356    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:51.386946    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:51.538887    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:51.742946    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:51.886061    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:52.039173    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:52.266470    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:52.386321    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:52.539313    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:52.742609    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:52.886595    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:53.038516    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:53.279773    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:53.385981    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:53.540073    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:53.743272    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:53.886472    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0828 09:52:54.038074    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:54.242727    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:54.386297    2196 kapi.go:107] duration metric: took 45.503890741s to wait for kubernetes.io/minikube-addons=registry ...
	I0828 09:52:54.539062    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:54.743475    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:55.065194    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:55.244490    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:55.538997    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:55.743051    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:56.039005    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:56.242888    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:56.538111    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:56.743112    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:57.038523    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:57.242844    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:57.538395    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:57.766923    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:58.215719    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:58.317717    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:58.540686    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:58.742158    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:59.037548    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:59.245938    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:52:59.538546    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:52:59.743942    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:00.038417    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:00.243259    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:00.539269    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:00.743036    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:01.038637    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:01.266335    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:01.538374    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:01.744591    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:02.037467    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:02.242115    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:02.537685    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:02.743139    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:03.037967    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:03.263459    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:03.563975    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:03.744166    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:04.037747    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:04.243286    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:04.538642    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:04.742490    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:05.063273    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:05.242842    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:05.565908    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:05.765679    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:06.038046    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:06.264520    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:06.566483    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:06.765986    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:07.037764    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:07.242503    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:07.563174    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:07.743887    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:08.037111    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:08.242166    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:08.539682    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:08.742700    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:09.037646    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:09.242445    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:09.538248    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:09.768676    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:10.037357    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:10.243272    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:10.539351    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:10.743722    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:11.037937    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:11.242148    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:11.538773    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:11.742077    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:12.038463    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:12.262782    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:12.537892    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:12.742109    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:13.037339    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:13.243934    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:13.537228    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:13.744225    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:14.038533    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:14.263484    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:14.537619    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:14.743538    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:15.037133    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:15.243726    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:15.537164    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:15.743557    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:16.037139    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:16.242505    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:16.537861    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:16.742276    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:17.040092    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:17.242134    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:17.538133    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:17.742340    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:18.038819    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:18.241689    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:18.538073    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:18.743783    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:19.037910    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:19.263247    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:19.537367    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:19.743675    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:20.038500    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:20.245165    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:20.537732    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:20.743346    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:21.038152    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:21.242921    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:21.538058    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:21.742473    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:22.036724    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:22.264839    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:22.537974    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:22.765570    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:23.064065    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:23.242118    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:23.537094    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:23.741746    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:24.039243    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:24.263244    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:24.538072    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:24.742042    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:25.064901    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:25.243567    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:25.537385    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:25.741598    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:26.041677    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:26.332073    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:26.562885    2196 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0828 09:53:26.742924    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:27.037170    2196 kapi.go:107] duration metric: took 1m21.503703988s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0828 09:53:27.241791    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:27.742475    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:28.243028    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:28.742156    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:29.250926    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:29.742720    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:30.244563    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:30.741281    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:31.241661    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:31.741765    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:32.242148    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:32.741372    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:33.243226    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:33.742457    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:34.241504    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:34.278569    2196 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0828 09:53:34.278592    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:34.742576    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:34.777211    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:35.242222    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:35.280476    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:35.742385    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:35.778007    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:36.241786    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:36.277740    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:36.741469    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:36.777850    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:37.242689    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:37.276999    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:37.743749    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:37.777103    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:38.243664    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:38.278197    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:38.742521    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:38.776902    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:39.243375    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0828 09:53:39.342847    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:39.742867    2196 kapi.go:107] duration metric: took 1m30.005489266s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0828 09:53:39.777530    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:40.277536    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:40.777385    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:41.277796    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:41.777235    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:42.277334    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:42.777645    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:43.277614    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:43.776882    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:44.277662    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:44.777617    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:45.277608    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:45.776857    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:46.277077    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:46.777615    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:47.277025    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:47.776905    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:48.277837    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:48.778239    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:49.277389    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:49.776832    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:50.277521    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:50.776881    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:51.277432    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:51.777614    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:52.277046    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:52.776822    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:53.276405    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:53.776223    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:54.277148    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:54.777683    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:55.276538    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:55.776338    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:56.277632    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:56.777161    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:57.276312    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:57.777493    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:58.277107    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:58.776549    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:59.277015    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:53:59.777256    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:00.276434    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:00.777127    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:01.276762    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:01.776578    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:02.276715    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:02.776803    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:03.276648    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:03.776177    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:04.276748    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:04.776291    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:05.277815    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:05.776241    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:06.276769    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:06.777224    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:07.276206    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:07.777788    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:08.277177    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:08.776793    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:09.275944    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:09.776308    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:10.276042    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:10.776201    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:11.275858    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:11.776193    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:12.276359    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:12.777133    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:13.276557    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:13.776332    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:14.276348    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:14.777079    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:15.276672    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:15.776430    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:16.275921    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:16.776902    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:17.276235    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:17.775950    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:18.276134    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:18.775960    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:19.276433    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:19.776947    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:20.276166    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:20.776067    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:21.275916    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:21.776034    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:22.276252    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:22.776688    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:23.275486    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:23.775591    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:24.275582    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:24.776360    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:25.275618    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:25.775445    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:26.276090    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:26.776371    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:27.275826    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:27.776083    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:28.276005    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:28.776389    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:29.275338    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:29.775297    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:30.275978    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:30.775825    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:31.275950    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:31.775361    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:32.275316    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:32.775640    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:33.275259    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:33.776200    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:34.275950    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:34.776242    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:35.275699    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:35.776039    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:36.275881    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:36.776085    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:37.275199    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:37.775471    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:38.276074    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:38.776157    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:39.277171    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:39.775393    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:40.275945    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:40.776258    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:41.275863    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:41.775813    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:42.277602    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:42.776715    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:43.278099    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:43.776220    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:44.276635    2196 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0828 09:54:44.778738    2196 kapi.go:107] duration metric: took 2m33.506574748s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0828 09:54:44.801559    2196 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-376000 cluster.
	I0828 09:54:44.823358    2196 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0828 09:54:44.846391    2196 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0828 09:54:44.868405    2196 out.go:177] * Enabled addons: storage-provisioner, cloud-spanner, helm-tiller, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, volcano, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0828 09:54:44.889307    2196 addons.go:510] duration metric: took 2m50.467067809s for enable addons: enabled=[storage-provisioner cloud-spanner helm-tiller nvidia-device-plugin ingress-dns storage-provisioner-rancher volcano inspektor-gadget metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0828 09:54:44.889337    2196 start.go:246] waiting for cluster config update ...
	I0828 09:54:44.889356    2196 start.go:255] writing updated cluster config ...
	I0828 09:54:44.912391    2196 ssh_runner.go:195] Run: rm -f paused
	I0828 09:54:44.964618    2196 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0828 09:54:45.001412    2196 out.go:201] 
	W0828 09:54:45.022285    2196 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0828 09:54:45.043362    2196 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0828 09:54:45.138270    2196 out.go:177] * Done! kubectl is now configured to use "addons-376000" cluster and "default" namespace by default
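The warning above reports "kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)". The skew is simply the absolute difference between the two minor version numbers; kubectl is supported within one minor version of the cluster, which is why minikube flags a skew of 2 and suggests its bundled `minikube kubectl`. A minimal shell sketch of that computation (illustrative only, not minikube's actual code; `minor_skew` is a hypothetical helper name):

```shell
# Compute the minor-version skew between a kubectl client and a cluster,
# e.g. the "minor skew: 2" reported for kubectl 1.29.2 vs Kubernetes 1.31.0.
minor_skew() {
  # $1 = client version, $2 = server version (both "major.minor.patch")
  c_minor=$(echo "$1" | cut -d. -f2)   # extract client minor, e.g. 29
  s_minor=$(echo "$2" | cut -d. -f2)   # extract server minor, e.g. 31
  echo $(( s_minor > c_minor ? s_minor - c_minor : c_minor - s_minor ))
}

minor_skew 1.29.2 1.31.0   # prints 2, matching the log line above
```

As the log suggests, running `minikube kubectl -- get pods -A` uses a kubectl binary matched to the cluster version, avoiding the skew entirely.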
	
	
	==> Docker <==
	Aug 28 17:04:21 addons-376000 dockerd[1231]: time="2024-08-28T17:04:21.402648216Z" level=info msg="ignoring event" container=b8680fff8c887d519b0cd359b3c39624464a32f4fa08ba96799c05df07805bd2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:21 addons-376000 dockerd[1231]: time="2024-08-28T17:04:21.505248364Z" level=info msg="ignoring event" container=5d59a68f7c4c8e98fac13e87d9b0cd177347aa4a2b3ec4cf7b4827954282d764 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.132839542Z" level=info msg="ignoring event" container=e6ca21cf4ef9a5480d59d422fdb67498130d7bf58fba6eca89e7d146f476c9a8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.225044318Z" level=info msg="ignoring event" container=88aaf8b7f17d72299a6ace95836c4dbaa06846ee847f88ba280e5b67ee58f510 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.227834954Z" level=info msg="ignoring event" container=17f1b54fb9a422de174e02713f2e618dc9fa56cccbaafcaabbe6c48ef80f6d3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.229596625Z" level=info msg="ignoring event" container=af6198911bcd2c190fd1865734c58ee88b55405ba694ac7aba8e16685fccc256 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.229677490Z" level=info msg="ignoring event" container=b8bd8a1a7dda0cce9816b58c05c54f85e45300a0a5f8301f8dba085bbdaf829f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.231709054Z" level=info msg="ignoring event" container=02e3832439a99f9a16392338f711aeed6113513e317c84df7af76209c5549d39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.316956780Z" level=info msg="ignoring event" container=b38a64cb1410f9d9a2f79f21c5eeaf817971aa251efd0e25d0de5f4d5ff12c04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.423146044Z" level=info msg="ignoring event" container=aed58f2abb7355dad08425b1565d2b71aef79846eba232af06451cfc77af6212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.522030359Z" level=info msg="ignoring event" container=ab41b32adb54d8e81e1cfe646385f233feaacc976753d8b3bcc92e2e57e2765c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.552881363Z" level=info msg="ignoring event" container=f6bcd7fceaadbfae3f6203d1b5e6d44de70566b6359e784f2d1a5b96cc479c0c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:23 addons-376000 dockerd[1231]: time="2024-08-28T17:04:23.557930671Z" level=info msg="ignoring event" container=555cbe30c92655097de9d8f25440344d9c437cbd70480f32dbe4dae8f5ba81fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:25 addons-376000 dockerd[1231]: time="2024-08-28T17:04:25.469267916Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:04:25 addons-376000 dockerd[1231]: time="2024-08-28T17:04:25.471744767Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Aug 28 17:04:29 addons-376000 dockerd[1231]: time="2024-08-28T17:04:29.720829673Z" level=info msg="ignoring event" container=f50d7a4cd9ccc15ff084707a6eaa270568ef66212410332de3709019e119cc09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:29 addons-376000 dockerd[1231]: time="2024-08-28T17:04:29.721952163Z" level=info msg="ignoring event" container=24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:29 addons-376000 dockerd[1231]: time="2024-08-28T17:04:29.845421287Z" level=info msg="ignoring event" container=eee32fe4e91fa5ccf788dfee9bbbd3a2c5a50e3bf92ec398f3fe43e3714827f3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:29 addons-376000 dockerd[1231]: time="2024-08-28T17:04:29.920260672Z" level=info msg="ignoring event" container=c70fd3db890711b077eae5af5a510339a538ba0a9da8282ec5f56955d6865628 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:30 addons-376000 cri-dockerd[1501]: time="2024-08-28T17:04:30Z" level=error msg="error getting RW layer size for container ID 'b38a64cb1410f9d9a2f79f21c5eeaf817971aa251efd0e25d0de5f4d5ff12c04': Error response from daemon: No such container: b38a64cb1410f9d9a2f79f21c5eeaf817971aa251efd0e25d0de5f4d5ff12c04"
	Aug 28 17:04:30 addons-376000 cri-dockerd[1501]: time="2024-08-28T17:04:30Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b38a64cb1410f9d9a2f79f21c5eeaf817971aa251efd0e25d0de5f4d5ff12c04'"
	Aug 28 17:04:35 addons-376000 dockerd[1231]: time="2024-08-28T17:04:35.466728860Z" level=info msg="ignoring event" container=5156bfe581bb46645a254b76c5b6e5b3db6c4643ad9cf343b48da2269dc75110 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:35 addons-376000 cri-dockerd[1501]: time="2024-08-28T17:04:35Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"yakd-dashboard-67d98fc6b-j7lpb_yakd-dashboard\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Aug 28 17:04:35 addons-376000 dockerd[1231]: time="2024-08-28T17:04:35.592833974Z" level=info msg="ignoring event" container=328ff881291737716de326f95abec4a0ca33a541f084cdea9723e9f42f558aeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 28 17:04:43 addons-376000 dockerd[1231]: time="2024-08-28T17:04:43.287048283Z" level=info msg="ignoring event" container=34ee2e3b7a6890479a92677a1ca9c6c155af95e33e6f104cbee638611f7ad801 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	0f0c0e98d4de6       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc            43 seconds ago      Exited              gadget                     7                   55b5d1de9c619       gadget-g57hh
	610a5379e1cd3       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                   0                   4f02043766b1f       gcp-auth-89d5ffd79-vhbqx
	2bd3e051fd3e3       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   b9071dd81b18a       ingress-nginx-controller-bc57996ff-gs9nv
	d1867c9103572       ce263a8653f9c                                                                                                                11 minutes ago      Exited              patch                      1                   bbf874636ff35       ingress-nginx-admission-patch-5ns7k
	0716151551ec0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   f7c6119ea45b7       ingress-nginx-admission-create-rrcxd
	338d89e8fb4b8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Running             registry-proxy             0                   c884d3ae610c4       registry-proxy-mbq7t
	2769567b660f9       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server             0                   ab0442f7ba724       metrics-server-84c5f94fbc-nvqvh
	2a655a02fb911       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner     0                   7146ec6d82eac       local-path-provisioner-86d989889c-ks6cv
	f29b40d0d0eb7       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago      Running             registry                   0                   5fd192f4a37c5       registry-6fb4cdfc84-hqg5k
	422d223b6098f       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                  12 minutes ago      Running             tiller                     0                   46c13f6b31058       tiller-deploy-b48cc5f79-sh2dj
	b0915e21a7c43       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   3f6fa0c7bf5eb       kube-ingress-dns-minikube
	4e4725e420873       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   69802d17a6c53       cloud-spanner-emulator-769b77f747-zbg4x
	f4685e3c69349       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   31933b713c7cc       nvidia-device-plugin-daemonset-4pp2v
	326bdcb3e331a       6e38f40d628db                                                                                                                12 minutes ago      Running             storage-provisioner        0                   6ee53072c0e13       storage-provisioner
	76011324676ec       cbb01a7bd410d                                                                                                                12 minutes ago      Running             coredns                    0                   f8cb8307f6c8f       coredns-6f6b679f8f-kllsn
	76075dfe63154       ad83b2ca7b09e                                                                                                                12 minutes ago      Running             kube-proxy                 0                   992415a41508c       kube-proxy-8bjfx
	9a6703d59f88a       1766f54c897f0                                                                                                                13 minutes ago      Running             kube-scheduler             0                   e470b490c2365       kube-scheduler-addons-376000
	7962f017d7fed       045733566833c                                                                                                                13 minutes ago      Running             kube-controller-manager    0                   5843231035dd4       kube-controller-manager-addons-376000
	255868fd7b118       2e96e5913fc06                                                                                                                13 minutes ago      Running             etcd                       0                   708304b3c59ab       etcd-addons-376000
	c81f7e2c13f04       604f5db92eaa8                                                                                                                13 minutes ago      Running             kube-apiserver             0                   29a21428032aa       kube-apiserver-addons-376000
	
	
	==> controller_ingress [2bd3e051fd3e] <==
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	W0828 16:53:26.486140       7 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0828 16:53:26.486290       7 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0828 16:53:26.490756       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.0" state="clean" commit="9edcffcde5595e8a5b1a35f88c421764e575afce" platform="linux/amd64"
	I0828 16:53:26.632855       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0828 16:53:26.652113       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0828 16:53:26.659107       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0828 16:53:26.672777       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"19a1543a-1af3-4c4a-90c0-168efe577aa1", APIVersion:"v1", ResourceVersion:"704", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0828 16:53:26.680080       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"ebaf5be2-745b-4fca-afc9-acc0f0dfd64f", APIVersion:"v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0828 16:53:26.680264       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"e4a80ac2-b906-4ea7-9f32-3a6f0d1e87c6", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0828 16:53:27.862020       7 nginx.go:317] "Starting NGINX process"
	I0828 16:53:27.862646       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0828 16:53:27.862744       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0828 16:53:27.862991       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0828 16:53:27.868088       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0828 16:53:27.868158       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-gs9nv"
	I0828 16:53:27.871225       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-gs9nv" node="addons-376000"
	I0828 16:53:27.888344       7 controller.go:213] "Backend successfully reloaded"
	I0828 16:53:27.888445       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0828 16:53:27.888538       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-gs9nv", UID:"51bd5c48-e527-45b9-b36c-76c7a557da9c", APIVersion:"v1", ResourceVersion:"752", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [76011324676e] <==
	[INFO] 127.0.0.1:55149 - 17793 "HINFO IN 1253128157047655052.3218655966490428594. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.087014613s
	[INFO] 10.244.0.8:36927 - 61281 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000364539s
	[INFO] 10.244.0.8:36927 - 21094 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000688549s
	[INFO] 10.244.0.8:58291 - 17718 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150873s
	[INFO] 10.244.0.8:58291 - 21044 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00028958s
	[INFO] 10.244.0.8:33185 - 310 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000253639s
	[INFO] 10.244.0.8:33185 - 8496 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000356241s
	[INFO] 10.244.0.8:57155 - 62776 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091641s
	[INFO] 10.244.0.8:57155 - 46393 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087214s
	[INFO] 10.244.0.8:51776 - 30998 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120295s
	[INFO] 10.244.0.8:51776 - 1552 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000228745s
	[INFO] 10.244.0.8:33003 - 53719 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121454s
	[INFO] 10.244.0.8:33003 - 63185 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000272986s
	[INFO] 10.244.0.8:34829 - 32280 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000083017s
	[INFO] 10.244.0.8:34829 - 2074 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000156052s
	[INFO] 10.244.0.8:45452 - 43307 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069853s
	[INFO] 10.244.0.8:45452 - 16168 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000239045s
	[INFO] 10.244.0.26:53748 - 25486 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000250778s
	[INFO] 10.244.0.26:36619 - 33877 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00020738s
	[INFO] 10.244.0.26:60220 - 35204 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000166838s
	[INFO] 10.244.0.26:37613 - 43731 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00062574s
	[INFO] 10.244.0.26:34740 - 39551 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000125883s
	[INFO] 10.244.0.26:58126 - 59948 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000104716s
	[INFO] 10.244.0.26:38864 - 19768 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.003334118s
	[INFO] 10.244.0.26:43902 - 43610 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.004109054s
	
	
	==> describe nodes <==
	Name:               addons-376000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-376000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f256f0bf490fd67de29a75a245d072e85b1b216
	                    minikube.k8s.io/name=addons-376000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_28T09_51_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-376000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 28 Aug 2024 16:51:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-376000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 28 Aug 2024 17:04:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 28 Aug 2024 17:00:29 +0000   Wed, 28 Aug 2024 16:51:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 28 Aug 2024 17:00:29 +0000   Wed, 28 Aug 2024 16:51:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 28 Aug 2024 17:00:29 +0000   Wed, 28 Aug 2024 16:51:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 28 Aug 2024 17:00:29 +0000   Wed, 28 Aug 2024 16:51:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-376000
	Capacity:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             16375060Ki
	  pods:               110
	Allocatable:
	  cpu:                12
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             16375060Ki
	  pods:               110
	System Info:
	  Machine ID:                 61e1b25cb6d34b228e3a1970afc7633b
	  System UUID:                61e1b25cb6d34b228e3a1970afc7633b
	  Boot ID:                    eabda6d5-913a-4616-acb2-93e3918f46d2
	  Kernel Version:             6.6.32-linuxkit
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m16s
	  default                     cloud-spanner-emulator-769b77f747-zbg4x     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-g57hh                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-vhbqx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-gs9nv    100m (0%)     0 (0%)      90Mi (0%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-kllsn                    100m (0%)     0 (0%)      70Mi (0%)        170Mi (1%)     12m
	  kube-system                 etcd-addons-376000                          100m (0%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-376000                250m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-376000       200m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8bjfx                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-376000                100m (0%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-nvqvh             100m (0%)     0 (0%)      200Mi (1%)       0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-4pp2v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-6fb4cdfc84-hqg5k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 registry-proxy-mbq7t                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 tiller-deploy-b48cc5f79-sh2dj               0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-ks6cv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (7%)   0 (0%)
	  memory             460Mi (2%)  170Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node addons-376000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node addons-376000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node addons-376000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node addons-376000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node addons-376000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node addons-376000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12m                node-controller  Node addons-376000 event: Registered Node addons-376000 in Controller
	
	
	==> dmesg <==
	[  +0.000001] virtio-pci 0000:00:06.0: PCI INT A: no GSI
	[  +0.003829] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:07.0: PCI INT A: no GSI
	[  +0.003244] virtio-pci 0000:00:08.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:08.0: PCI INT A: no GSI
	[  +0.006114] virtio-pci 0000:00:09.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:09.0: PCI INT A: no GSI
	[  +0.006130] virtio-pci 0000:00:0a.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:0a.0: PCI INT A: no GSI
	[  +0.006033] virtio-pci 0000:00:0b.0: can't derive routing for PCI INT A
	[  +0.000003] virtio-pci 0000:00:0b.0: PCI INT A: no GSI
	[  +0.005763] virtio-pci 0000:00:0c.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:0c.0: PCI INT A: no GSI
	[  +0.006199] virtio-pci 0000:00:0d.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:0d.0: PCI INT A: no GSI
	[  +0.003751] virtio-pci 0000:00:0e.0: can't derive routing for PCI INT A
	[  +0.000001] virtio-pci 0000:00:0e.0: PCI INT A: no GSI
	[  +0.003984] virtio-pci 0000:00:0f.0: can't derive routing for PCI INT A
	[  +0.000002] virtio-pci 0000:00:0f.0: PCI INT A: no GSI
	[  +0.010280] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
	[  +0.038633] lpc_ich 0000:00:1f.0: No MFD cells added
	[  +0.316135] netlink: 'init': attribute type 4 has an invalid length.
	[  +0.087189] fakeowner: loading out-of-tree module taints kernel.
	[  +0.293125] netlink: 'init': attribute type 22 has an invalid length.
	[Aug28 16:51] systemd[1182]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	
	
	==> etcd [255868fd7b11] <==
	{"level":"info","ts":"2024-08-28T16:52:08.883587Z","caller":"traceutil/trace.go:171","msg":"trace[835165422] transaction","detail":"{read_only:false; response_revision:898; number_of_response:1; }","duration":"113.282748ms","start":"2024-08-28T16:52:08.770292Z","end":"2024-08-28T16:52:08.883575Z","steps":["trace[835165422] 'process raft request'  (duration: 92.993821ms)","trace[835165422] 'compare'  (duration: 20.175261ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T16:52:09.739643Z","caller":"traceutil/trace.go:171","msg":"trace[1481102744] linearizableReadLoop","detail":"{readStateIndex:970; appliedIndex:967; }","duration":"123.909688ms","start":"2024-08-28T16:52:09.615722Z","end":"2024-08-28T16:52:09.739632Z","steps":["trace[1481102744] 'read index received'  (duration: 28.27918ms)","trace[1481102744] 'applied index is now lower than readState.Index'  (duration: 95.630128ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T16:52:09.739803Z","caller":"traceutil/trace.go:171","msg":"trace[1495062145] transaction","detail":"{read_only:false; response_revision:948; number_of_response:1; }","duration":"170.247475ms","start":"2024-08-28T16:52:09.569544Z","end":"2024-08-28T16:52:09.739791Z","steps":["trace[1495062145] 'process raft request'  (duration: 169.937324ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:52:09.740138Z","caller":"traceutil/trace.go:171","msg":"trace[1178695902] transaction","detail":"{read_only:false; response_revision:949; number_of_response:1; }","duration":"169.406711ms","start":"2024-08-28T16:52:09.570719Z","end":"2024-08-28T16:52:09.740126Z","steps":["trace[1178695902] 'process raft request'  (duration: 168.867406ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:09.740158Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.418665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:09.740971Z","caller":"traceutil/trace.go:171","msg":"trace[1356473743] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:950; }","duration":"125.226562ms","start":"2024-08-28T16:52:09.615720Z","end":"2024-08-28T16:52:09.740946Z","steps":["trace[1356473743] 'agreement among raft nodes before linearized reading'  (duration: 124.404764ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:15.657415Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.526104ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:15.657474Z","caller":"traceutil/trace.go:171","msg":"trace[715893867] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:999; }","duration":"150.592857ms","start":"2024-08-28T16:52:15.506871Z","end":"2024-08-28T16:52:15.657464Z","steps":["trace[715893867] 'range keys from in-memory index tree'  (duration: 150.516845ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:15.657784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.64045ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:15.657851Z","caller":"traceutil/trace.go:171","msg":"trace[579862356] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:999; }","duration":"119.708186ms","start":"2024-08-28T16:52:15.538134Z","end":"2024-08-28T16:52:15.657843Z","steps":["trace[579862356] 'range keys from in-memory index tree'  (duration: 119.605394ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:33.154596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.669281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:33.154667Z","caller":"traceutil/trace.go:171","msg":"trace[1485000469] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1041; }","duration":"115.850709ms","start":"2024-08-28T16:52:33.038806Z","end":"2024-08-28T16:52:33.154657Z","steps":["trace[1485000469] 'range keys from in-memory index tree'  (duration: 115.632162ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:45.644839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.586979ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:45.644904Z","caller":"traceutil/trace.go:171","msg":"trace[468972434] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1098; }","duration":"106.685184ms","start":"2024-08-28T16:52:45.538210Z","end":"2024-08-28T16:52:45.644895Z","steps":["trace[468972434] 'range keys from in-memory index tree'  (duration: 106.524073ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:52:58.213261Z","caller":"traceutil/trace.go:171","msg":"trace[1161366995] linearizableReadLoop","detail":"{readStateIndex:1157; appliedIndex:1156; }","duration":"176.774132ms","start":"2024-08-28T16:52:58.036477Z","end":"2024-08-28T16:52:58.213251Z","steps":["trace[1161366995] 'read index received'  (duration: 174.797778ms)","trace[1161366995] 'applied index is now lower than readState.Index'  (duration: 1.975937ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T16:52:58.213297Z","caller":"traceutil/trace.go:171","msg":"trace[1991279649] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"191.656135ms","start":"2024-08-28T16:52:58.021633Z","end":"2024-08-28T16:52:58.213290Z","steps":["trace[1991279649] 'process raft request'  (duration: 189.620733ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:58.213349Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.862822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-08-28T16:52:58.213365Z","caller":"traceutil/trace.go:171","msg":"trace[557895269] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1126; }","duration":"176.886969ms","start":"2024-08-28T16:52:58.036473Z","end":"2024-08-28T16:52:58.213360Z","steps":["trace[557895269] 'agreement among raft nodes before linearized reading'  (duration: 176.850796ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:52:58.214466Z","caller":"traceutil/trace.go:171","msg":"trace[1421216910] transaction","detail":"{read_only:false; response_revision:1127; number_of_response:1; }","duration":"108.11789ms","start":"2024-08-28T16:52:58.106339Z","end":"2024-08-28T16:52:58.214456Z","steps":["trace[1421216910] 'process raft request'  (duration: 107.972976ms)"],"step_count":1}
	{"level":"warn","ts":"2024-08-28T16:52:58.214468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.843445ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.49.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-08-28T16:52:58.214603Z","caller":"traceutil/trace.go:171","msg":"trace[892549834] range","detail":"{range_begin:/registry/masterleases/192.168.49.2; range_end:; response_count:1; response_revision:1127; }","duration":"144.984104ms","start":"2024-08-28T16:52:58.069612Z","end":"2024-08-28T16:52:58.214596Z","steps":["trace[892549834] 'agreement among raft nodes before linearized reading'  (duration: 144.793484ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-28T16:54:45.176462Z","caller":"traceutil/trace.go:171","msg":"trace[126617629] transaction","detail":"{read_only:false; response_revision:1515; number_of_response:1; }","duration":"125.481011ms","start":"2024-08-28T16:54:45.050966Z","end":"2024-08-28T16:54:45.176447Z","steps":["trace[126617629] 'process raft request'  (duration: 88.496253ms)","trace[126617629] 'compare'  (duration: 36.877062ms)"],"step_count":2}
	{"level":"info","ts":"2024-08-28T17:01:45.394121Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1893}
	{"level":"info","ts":"2024-08-28T17:01:45.466227Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1893,"took":"71.729785ms","hash":1676900674,"current-db-size-bytes":9064448,"current-db-size":"9.1 MB","current-db-size-in-use-bytes":4972544,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-08-28T17:01:45.466278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1676900674,"revision":1893,"compact-revision":-1}
	
	
	==> gcp-auth [610a5379e1cd] <==
	2024/08/28 16:54:44 GCP Auth Webhook started!
	2024/08/28 16:55:01 Ready to marshal response ...
	2024/08/28 16:55:01 Ready to write response ...
	2024/08/28 16:55:01 Ready to marshal response ...
	2024/08/28 16:55:01 Ready to write response ...
	2024/08/28 16:55:28 Ready to marshal response ...
	2024/08/28 16:55:28 Ready to write response ...
	2024/08/28 16:55:28 Ready to marshal response ...
	2024/08/28 16:55:28 Ready to write response ...
	2024/08/28 16:55:28 Ready to marshal response ...
	2024/08/28 16:55:28 Ready to write response ...
	2024/08/28 17:03:43 Ready to marshal response ...
	2024/08/28 17:03:43 Ready to write response ...
	2024/08/28 17:03:47 Ready to marshal response ...
	2024/08/28 17:03:47 Ready to write response ...
	2024/08/28 17:04:13 Ready to marshal response ...
	2024/08/28 17:04:13 Ready to write response ...
	
	
	==> kernel <==
	 17:04:45 up 33 min,  0 users,  load average: 2.95, 3.22, 2.45
	Linux addons-376000 6.6.32-linuxkit #1 SMP PREEMPT_DYNAMIC Thu Jun 13 14:14:43 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c81f7e2c13f0] <==
	I0828 16:55:18.774708       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	W0828 16:55:19.258832       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	I0828 16:55:19.363384       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:55:19.462712       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0828 16:55:19.561628       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0828 16:55:19.775408       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0828 16:55:19.775451       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0828 16:55:19.961308       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0828 16:55:20.275550       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0828 16:55:20.563120       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0828 16:55:20.818986       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0828 17:03:55.382954       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0828 17:04:29.603879       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:04:29.603934       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:04:29.613633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:04:29.613694       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:04:29.623449       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:04:29.623504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:04:29.630925       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:04:29.630976       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0828 17:04:29.640138       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0828 17:04:29.640186       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0828 17:04:30.631196       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0828 17:04:30.640597       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0828 17:04:30.724686       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [7962f017d7fe] <==
	E0828 17:04:30.950614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:31.647219       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:31.647344       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:31.835736       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:31.835810       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:31.960629       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:31.960679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:33.447680       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:33.447771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:34.763573       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:34.763678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:34.848625       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:34.848687       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:04:35.436208       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="2.923µs"
	W0828 17:04:37.096609       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:37.096695       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:38.796842       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:38.796890       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:38.914522       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:38.914615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:39.338158       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:39.338220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0828 17:04:39.757405       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0828 17:04:39.757455       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0828 17:04:45.507508       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	
	
	==> kube-proxy [76075dfe6315] <==
	I0828 16:51:59.474212       1 server_linux.go:66] "Using iptables proxy"
	I0828 16:52:00.064342       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0828 16:52:00.064767       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0828 16:52:00.376530       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0828 16:52:00.376594       1 server_linux.go:169] "Using iptables Proxier"
	I0828 16:52:00.380264       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0828 16:52:00.380602       1 server.go:483] "Version info" version="v1.31.0"
	I0828 16:52:00.380625       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0828 16:52:00.382652       1 config.go:197] "Starting service config controller"
	I0828 16:52:00.382728       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0828 16:52:00.382838       1 config.go:104] "Starting endpoint slice config controller"
	I0828 16:52:00.382842       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0828 16:52:00.383343       1 config.go:326] "Starting node config controller"
	I0828 16:52:00.383360       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0828 16:52:00.563802       1 shared_informer.go:320] Caches are synced for node config
	I0828 16:52:00.563853       1 shared_informer.go:320] Caches are synced for service config
	I0828 16:52:00.563892       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9a6703d59f88] <==
	E0828 16:51:46.693960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0828 16:51:46.693964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:46.692564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:46.693982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:46.692786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0828 16:51:46.693999       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:46.692936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0828 16:51:46.694039       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:46.693117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:46.694055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.565772       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:47.565858       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.598627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0828 16:51:47.598688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.623180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0828 16:51:47.623229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.712513       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0828 16:51:47.712566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.726175       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:47.726280       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0828 16:51:47.800757       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0828 16:51:47.800805       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0828 16:51:47.819346       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0828 16:51:47.819398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0828 16:51:50.589094       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 28 17:04:30 addons-376000 kubelet[2360]: I0828 17:04:30.467168    2360 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"f50d7a4cd9ccc15ff084707a6eaa270568ef66212410332de3709019e119cc09"} err="failed to get container status \"f50d7a4cd9ccc15ff084707a6eaa270568ef66212410332de3709019e119cc09\": rpc error: code = Unknown desc = Error response from daemon: No such container: f50d7a4cd9ccc15ff084707a6eaa270568ef66212410332de3709019e119cc09"
	Aug 28 17:04:30 addons-376000 kubelet[2360]: I0828 17:04:30.467185    2360 scope.go:117] "RemoveContainer" containerID="24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269"
	Aug 28 17:04:30 addons-376000 kubelet[2360]: I0828 17:04:30.477738    2360 scope.go:117] "RemoveContainer" containerID="24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269"
	Aug 28 17:04:30 addons-376000 kubelet[2360]: E0828 17:04:30.478465    2360 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269" containerID="24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269"
	Aug 28 17:04:30 addons-376000 kubelet[2360]: I0828 17:04:30.478519    2360 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269"} err="failed to get container status \"24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269\": rpc error: code = Unknown desc = Error response from daemon: No such container: 24edacfccf3ba6033d05992f005113fdf1a2837e6d9127de2e1919fd35c99269"
	Aug 28 17:04:31 addons-376000 kubelet[2360]: I0828 17:04:31.021300    2360 scope.go:117] "RemoveContainer" containerID="0f0c0e98d4de62ee810dc85d6505d7c5d7a9449645f5b67ac175bb95990b822f"
	Aug 28 17:04:31 addons-376000 kubelet[2360]: E0828 17:04:31.021530    2360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-g57hh_gadget(492bce28-3ac3-422a-9b8d-6fc38b28ec30)\"" pod="gadget/gadget-g57hh" podUID="492bce28-3ac3-422a-9b8d-6fc38b28ec30"
	Aug 28 17:04:31 addons-376000 kubelet[2360]: I0828 17:04:31.029484    2360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c675c1-2ba5-47b4-bbc4-f4a67107fd97" path="/var/lib/kubelet/pods/96c675c1-2ba5-47b4-bbc4-f4a67107fd97/volumes"
	Aug 28 17:04:31 addons-376000 kubelet[2360]: I0828 17:04:31.029769    2360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfdfd478-64fa-44b9-8cc5-a761f47d7f31" path="/var/lib/kubelet/pods/cfdfd478-64fa-44b9-8cc5-a761f47d7f31/volumes"
	Aug 28 17:04:34 addons-376000 kubelet[2360]: E0828 17:04:34.023514    2360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="849a409e-3516-41c7-8f16-47a1fbf9615e"
	Aug 28 17:04:35 addons-376000 kubelet[2360]: I0828 17:04:35.782004    2360 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hfmz\" (UniqueName: \"kubernetes.io/projected/84831e1d-7fe1-4ba9-97e4-5e2330be6308-kube-api-access-8hfmz\") pod \"84831e1d-7fe1-4ba9-97e4-5e2330be6308\" (UID: \"84831e1d-7fe1-4ba9-97e4-5e2330be6308\") "
	Aug 28 17:04:35 addons-376000 kubelet[2360]: I0828 17:04:35.785028    2360 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84831e1d-7fe1-4ba9-97e4-5e2330be6308-kube-api-access-8hfmz" (OuterVolumeSpecName: "kube-api-access-8hfmz") pod "84831e1d-7fe1-4ba9-97e4-5e2330be6308" (UID: "84831e1d-7fe1-4ba9-97e4-5e2330be6308"). InnerVolumeSpecName "kube-api-access-8hfmz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:04:35 addons-376000 kubelet[2360]: I0828 17:04:35.883268    2360 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8hfmz\" (UniqueName: \"kubernetes.io/projected/84831e1d-7fe1-4ba9-97e4-5e2330be6308-kube-api-access-8hfmz\") on node \"addons-376000\" DevicePath \"\""
	Aug 28 17:04:36 addons-376000 kubelet[2360]: I0828 17:04:36.600451    2360 scope.go:117] "RemoveContainer" containerID="5156bfe581bb46645a254b76c5b6e5b3db6c4643ad9cf343b48da2269dc75110"
	Aug 28 17:04:37 addons-376000 kubelet[2360]: E0828 17:04:37.022882    2360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="59a886fd-e1a7-46a9-9bff-0b1fa3137ad6"
	Aug 28 17:04:37 addons-376000 kubelet[2360]: I0828 17:04:37.031961    2360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84831e1d-7fe1-4ba9-97e4-5e2330be6308" path="/var/lib/kubelet/pods/84831e1d-7fe1-4ba9-97e4-5e2330be6308/volumes"
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.021206    2360 scope.go:117] "RemoveContainer" containerID="0f0c0e98d4de62ee810dc85d6505d7c5d7a9449645f5b67ac175bb95990b822f"
	Aug 28 17:04:43 addons-376000 kubelet[2360]: E0828 17:04:43.021468    2360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-g57hh_gadget(492bce28-3ac3-422a-9b8d-6fc38b28ec30)\"" pod="gadget/gadget-g57hh" podUID="492bce28-3ac3-422a-9b8d-6fc38b28ec30"
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.485992    2360 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zqvt\" (UniqueName: \"kubernetes.io/projected/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-kube-api-access-2zqvt\") pod \"59a886fd-e1a7-46a9-9bff-0b1fa3137ad6\" (UID: \"59a886fd-e1a7-46a9-9bff-0b1fa3137ad6\") "
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.486028    2360 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-gcp-creds\") pod \"59a886fd-e1a7-46a9-9bff-0b1fa3137ad6\" (UID: \"59a886fd-e1a7-46a9-9bff-0b1fa3137ad6\") "
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.486090    2360 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "59a886fd-e1a7-46a9-9bff-0b1fa3137ad6" (UID: "59a886fd-e1a7-46a9-9bff-0b1fa3137ad6"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.487987    2360 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-kube-api-access-2zqvt" (OuterVolumeSpecName: "kube-api-access-2zqvt") pod "59a886fd-e1a7-46a9-9bff-0b1fa3137ad6" (UID: "59a886fd-e1a7-46a9-9bff-0b1fa3137ad6"). InnerVolumeSpecName "kube-api-access-2zqvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.587691    2360 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-gcp-creds\") on node \"addons-376000\" DevicePath \"\""
	Aug 28 17:04:43 addons-376000 kubelet[2360]: I0828 17:04:43.587755    2360 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2zqvt\" (UniqueName: \"kubernetes.io/projected/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6-kube-api-access-2zqvt\") on node \"addons-376000\" DevicePath \"\""
	Aug 28 17:04:45 addons-376000 kubelet[2360]: I0828 17:04:45.028833    2360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a886fd-e1a7-46a9-9bff-0b1fa3137ad6" path="/var/lib/kubelet/pods/59a886fd-e1a7-46a9-9bff-0b1fa3137ad6/volumes"
	
	
	==> storage-provisioner [326bdcb3e331] <==
	I0828 16:52:03.080829       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0828 16:52:03.175254       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0828 16:52:03.175498       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0828 16:52:03.275803       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0828 16:52:03.275948       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-376000_ec79b3ae-a628-4375-8283-2586aebd0c14!
	I0828 16:52:03.277632       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d8a1d9a-1996-4dee-9d5d-3d609e31aae8", APIVersion:"v1", ResourceVersion:"661", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-376000_ec79b3ae-a628-4375-8283-2586aebd0c14 became leader
	I0828 16:52:03.376163       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-376000_ec79b3ae-a628-4375-8283-2586aebd0c14!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p addons-376000 -n addons-376000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-376000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-rrcxd ingress-nginx-admission-patch-5ns7k
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-376000 describe pod busybox ingress-nginx-admission-create-rrcxd ingress-nginx-admission-patch-5ns7k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-376000 describe pod busybox ingress-nginx-admission-create-rrcxd ingress-nginx-admission-patch-5ns7k: exit status 1 (59.866884ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-376000/192.168.49.2
	Start Time:       Wed, 28 Aug 2024 09:55:28 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.28
	IPs:
	  IP:  10.244.0.28
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtnrb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wtnrb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m18s                   default-scheduler  Successfully assigned default/busybox to addons-376000
	  Normal   Pulling    7m49s (x4 over 9m18s)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m48s (x4 over 9m17s)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m48s (x4 over 9m17s)   kubelet            Error: ErrImagePull
	  Warning  Failed     7m23s (x6 over 9m17s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m14s (x19 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rrcxd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5ns7k" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-376000 describe pod busybox ingress-nginx-admission-create-rrcxd ingress-nginx-admission-patch-5ns7k: exit status 1
--- FAIL: TestAddons/parallel/Registry (74.44s)
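Note: the step that fails here is the in-cluster connectivity probe (addons_test.go:347), which addresses the registry by its standard Kubernetes Service DNS name, `<service>.<namespace>.svc.cluster.local`. As a minimal sketch of how that name is composed — `svc_dns` is a hypothetical helper, not part of the test suite:

```shell
# Compose the cluster-local DNS name for a Service, per the Kubernetes
# "DNS for Services and Pods" convention: <service>.<namespace>.svc.cluster.local
svc_dns() {
  # $1 = service name, $2 = namespace
  echo "$1.$2.svc.cluster.local"
}

# The Registry test probes exactly this name with `wget --spider -S`:
svc_dns registry kube-system   # prints: registry.kube-system.svc.cluster.local
```

The probe itself never ran to completion here: the `registry-test` pod's `gcr.io/k8s-minikube/busybox` image pull failed (ImagePullBackOff), so the wget container was never started.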

TestMountStart/serial/StartWithMountFirst (7201.807s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-129000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0828 10:24:45.245073    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:26:08.336224    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:28:43.321061    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:29:45.349305    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:30:06.386588    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:33:43.317801    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:34:45.344361    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:38:43.312791    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-129000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : signal: killed (15m0.004106349s)

-- stdout --
	* [mount-start-1-129000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting minikube without Kubernetes in cluster mount-start-1-129000
	* Pulling base image v0.0.44-1724775115-19521 ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...

-- /stdout --
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-129000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker " : signal: killed
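The `signal: killed (15m0.004106349s)` result above means the harness's 15-minute deadline expired and the child `minikube start` process was SIGKILLed while still at "Creating docker container"; the start never finished on its own. A small illustration of that exit behavior, assuming GNU coreutils `timeout` is available (it is an analogy for the harness's deadline, not the tool the harness uses):

```shell
# A child that outlives its deadline and is sent SIGKILL exits with
# status 137 = 128 + 9 (SIGKILL's signal number), which is what the
# Go test harness surfaces as "signal: killed".
timeout -s KILL 1 sleep 5
echo "exit=$?"   # prints: exit=137
```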
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMountStart/serial/StartWithMountFirst]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect mount-start-1-129000
E0828 10:39:45.341866    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:42:48.434481    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:43:43.311647    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:44:45.338221    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:46:46.379701    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:48:43.306457    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:49:45.335224    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:53:43.303765    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:54:45.331554    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:58:43.449025    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:59:28.576525    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:59:45.478969    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:03:26.527184    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:03:43.451903    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:04:45.480253    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:08:43.454962    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:09:45.482877    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:13:43.456529    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:14:45.486013    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:16:08.587377    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:18:43.458147    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:19:45.487244    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:20:06.537888    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:23:43.461448    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:24:45.489027    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:28:43.461568    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:29:45.574319    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:32:48.677051    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:33:43.545595    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:34:45.574377    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:36:46.626418    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:38:43.543461    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:39:45.573958    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:43:43.542824    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:44:45.572610    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:48:43.542077    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:49:28.677122    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 11:49:45.571834    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
panic: test timed out after 2h0m0s
running tests:
	TestMountStart (1h25m59s)
	TestMountStart/serial (1h25m59s)
	TestMountStart/serial/StartWithMountFirst (1h25m59s)

goroutine 2412 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 87 minutes]:
testing.(*T).Run(0xc0007fe1a0, {0xaa51aac?, 0x8c9503801297a88?}, 0xc19ad80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
testing.runTests.func1(0xc0007fe1a0)
	/usr/local/go/src/testing/testing.go:2161 +0x37
testing.tRunner(0xc0007fe1a0, 0xc001297bb0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
testing.runTests(0xc00081e300, {0xd7380c0, 0x2a, 0x2a}, {0x8d726c5?, 0xaa97f68?, 0xd75b760?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000850b40)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000850b40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 9 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00081dc80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1122 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f344e0)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc001f344e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc001f344e0, 0xc19acb8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 128 [chan receive, 119 minutes]:
testing.(*T).Parallel(0xc0007ff040)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc0007ff040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestOffline(0xc0007ff040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/aab_offline_test.go:32 +0x39
testing.tRunner(0xc0007ff040, 0xc19ada8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 69 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 68
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 1130 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f351e0)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f351e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperkitDriverSkipUpgrade(0xc001f351e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:172 +0x2a
testing.tRunner(0xc001f351e0, 0xc19ad18)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 138 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xc1c5a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 137
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 151 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xc1cf350, 0xc000522ae0}, 0xc0013a9750, 0xc00136ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xc1cf350, 0xc000522ae0}, 0x58?, 0xc0013a9750, 0xc0013a9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xc1cf350?, 0xc000522ae0?}, 0xc1a9c20?, 0xc000818040?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013a97d0?, 0x8e2c844?, 0xc0007670e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 139 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0006fb040, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 137
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 2411 [IO wait, 71 minutes]:
internal/poll.runtime_pollWait(0x55159218, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0015462a0?, 0xc00094a400?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0015462a0, {0xc00094a400, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002054040, {0xc00094a400?, 0xc0015ac700?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ae0300, {0xc1a79e8, 0xc0007f0110})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xc1a7b28, 0xc001ae0300}, {0xc1a79e8, 0xc0007f0110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001915e78?, {0xc1a7b28, 0xc001ae0300})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001915f38?, {0xc1a7b28?, 0xc001ae0300?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xc1a7b28, 0xc001ae0300}, {0xc1a7aa8, 0xc002054040}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001adcba0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2417
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2410 [IO wait, 71 minutes]:
internal/poll.runtime_pollWait(0x55159028, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0015461e0?, 0xc00094a200?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0015461e0, {0xc00094a200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc002054028, {0xc00094a200?, 0x8cb30c5?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001ae02d0, {0xc1a79e8, 0xc0007f00f0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xc1a7b28, 0xc001ae02d0}, {0xc1a79e8, 0xc0007f00f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xaa4aa67?, {0xc1a7b28, 0xc001ae02d0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xaa665c0?, {0xc1a7b28?, 0xc001ae02d0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xc1a7b28, 0xc001ae02d0}, {0xc1a7aa8, 0xc002054028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0012a6600?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2417
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 1124 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f34820)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f34820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestDockerFlags(0xc001f34820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:43 +0x105
testing.tRunner(0xc001f34820, 0xc19acc8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 150 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0006faf50, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001423d80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xc1e93a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0006fb040)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00049a000, {0xc1a9020, 0xc0007661b0}, 0x1, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00049a000, 0x3b9aca00, 0x0, 0x1, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 139
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 152 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 151
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2399 [chan receive, 87 minutes]:
testing.(*T).Run(0xc0007fe9c0, {0xaa3eef2?, 0xd18c2e2800?}, 0xc0012a6600)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestMountStart(0xc0007fe9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:57 +0x24c
testing.tRunner(0xc0007fe9c0, 0xc19ad80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2346 [syscall, 89 minutes]:
syscall.syscall(0x0?, 0xc001d492c0?, 0xc0001106f0?, 0x8d52c5d?)
	/usr/local/go/src/runtime/sys_darwin.go:23 +0x70
syscall.Flock(0xc001d4ce70?, 0x1?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:682 +0x29
github.com/juju/mutex/v2.acquireFlock.func3()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:114 +0x34
github.com/juju/mutex/v2.acquireFlock.func4()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:121 +0x58
github.com/juju/mutex/v2.acquireFlock.func5()
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:151 +0x22
created by github.com/juju/mutex/v2.acquireFlock in goroutine 2336
	/var/lib/jenkins/go/pkg/mod/github.com/juju/mutex/v2@v2.0.0/mutex_flock.go:150 +0x4b1

goroutine 1123 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f34680)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f34680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertExpiration(0xc001f34680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:115 +0x39
testing.tRunner(0xc001f34680, 0xc19acb0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1125 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f349c0)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f349c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdFlag(0xc001f349c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:83 +0x92
testing.tRunner(0xc001f349c0, 0xc19acf8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1126 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f34b60)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f34b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc001f34b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc001f34b60, 0xc19acf0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1129 [chan receive, 105 minutes]:
testing.(*T).Parallel(0xc001f35040)
	/usr/local/go/src/testing/testing.go:1483 +0x215
k8s.io/minikube/test/integration.MaybeParallel(0xc001f35040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestHyperKitDriverInstallOrUpdate(0xc001f35040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/driver_install_or_update_test.go:108 +0x39
testing.tRunner(0xc001f35040, 0xc19ad10)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1206 [IO wait, 105 minutes]:
internal/poll.runtime_pollWait(0x55159120, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001923b80?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc001923b80)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc001923b80)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc00201cac0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc00201cac0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000231a40, {0xc1c1e20, 0xc00201cac0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc000231a40)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0x8e2c844?, 0xc001f481a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1203
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2212 +0x129

goroutine 1515 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc000003500, 0xc000523980)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1514
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1813 [select, 101 minutes]:
net/http.(*persistConn).readLoop(0xc0019df680)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1825
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 1401 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1400
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2417 [syscall, 71 minutes]:
syscall.syscall6(0xc001ae1f80?, 0x1000000000010?, 0x10100000015?, 0x54ca8a78?, 0x90?, 0xe1f4108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc001575548?, 0x8cb30c5?, 0x90?, 0xc1046c0?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x8de3885?, 0xc00157557c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001d80240)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001474000)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001474000)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0007ff1e0, 0xc001474000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.PostMortemLogs(0xc0007ff1e0, {0xc0019e2570, 0x14}, {0x0, 0x0, 0xc0015ac8c0?})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:231 +0x3aa
runtime.Goexit()
	/usr/local/go/src/runtime/panic.go:626 +0x5e
testing.(*common).FailNow(0xc0007ff1e0)
	/usr/local/go/src/testing/testing.go:1005 +0x4a
testing.(*common).Fatalf(0xc0007ff1e0, {0xaab2c90?, 0xffffffffffffffff?}, {0xc001575f00?, 0x11?, 0x12?})
	/usr/local/go/src/testing/testing.go:1089 +0x5e
k8s.io/minikube/test/integration.validateStartWithMount({0xc1cf190, 0xc000432380}, 0xc0007ff1e0, {0xc0019e2570, 0x14})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:100 +0x44f
k8s.io/minikube/test/integration.TestMountStart.func1.1(0xc0007ff1e0?)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:83 +0x31
testing.tRunner(0xc0007ff1e0, 0xc0012a6640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2400
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1436 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0xc1c5a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/delaying_queue.go:141 +0x238

goroutine 1400 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xc1cf350, 0xc000522ae0}, 0xc0013aa750, 0xc001373f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xc1cf350, 0xc000522ae0}, 0x30?, 0xc0013aa750, 0xc0013aa798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xc1cf350?, 0xc000522ae0?}, 0xc001f49d40?, 0x8de6540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013aa7d0?, 0x8e2c844?, 0xc001ffaf00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:145 +0x29a

goroutine 1399 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001eebf10, 0x29)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc0020cbd80?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0xc1e93a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/util/workqueue/queue.go:282 +0x98
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001eebf40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005db100, {0xc1a9020, 0xc0017890e0}, 0x1, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005db100, 0x3b9aca00, 0x0, 0x1, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.31.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1437
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:143 +0x1ef

goroutine 1437 [chan receive, 103 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001eebf40, 0xc000522ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cert_rotation.go:150 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1338
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.0/transport/cache.go:122 +0x585

goroutine 1711 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc00195b800, 0xc001a18d20)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1710
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1741 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc001a90d80, 0xc001a19e60)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1740
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2400 [chan receive, 87 minutes]:
testing.(*T).Run(0xc0007feea0, {0xaa6055a?, 0x8de6380?}, 0xc0012a6640)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestMountStart.func1(0xc0007feea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/mount_start_test.go:82 +0x1be
testing.tRunner(0xc0007feea0, 0xc0012a6600)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2399
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1814 [select, 101 minutes]:
net/http.(*persistConn).writeLoop(0xc0019df680)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1825
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 1808 [chan send, 101 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b39680, 0xc001b6a8a0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1293
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                

Test pass (158/175)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.06
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.6
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.31.0/json-events 6.06
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.3
18 TestDownloadOnly/v1.31.0/DeleteAll 0.34
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.21
20 TestDownloadOnlyKic 1.56
21 TestBinaryMirror 1.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 224.44
29 TestAddons/serial/Volcano 42.89
31 TestAddons/serial/GCPAuth/Namespaces 0.11
35 TestAddons/parallel/InspektorGadget 10.64
36 TestAddons/parallel/MetricsServer 5.64
37 TestAddons/parallel/HelmTiller 10.25
39 TestAddons/parallel/CSI 57.95
40 TestAddons/parallel/Headlamp 18.64
41 TestAddons/parallel/CloudSpanner 5.52
42 TestAddons/parallel/LocalPath 55.37
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 10.6
45 TestAddons/StoppedEnableDisable 11.38
56 TestErrorSpam/setup 21.32
57 TestErrorSpam/start 2
58 TestErrorSpam/status 0.83
59 TestErrorSpam/pause 1.58
60 TestErrorSpam/unpause 1.5
61 TestErrorSpam/stop 2.43
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 32.82
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 27.44
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.59
73 TestFunctional/serial/CacheCmd/cache/add_local 1.47
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
75 TestFunctional/serial/CacheCmd/cache/list 0.08
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.98
78 TestFunctional/serial/CacheCmd/cache/delete 0.17
79 TestFunctional/serial/MinikubeKubectlCmd 1.21
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.57
81 TestFunctional/serial/ExtraConfig 41.01
82 TestFunctional/serial/ComponentHealth 0.06
83 TestFunctional/serial/LogsCmd 3.15
84 TestFunctional/serial/LogsFileCmd 2.86
85 TestFunctional/serial/InvalidService 4.2
87 TestFunctional/parallel/ConfigCmd 0.65
88 TestFunctional/parallel/DashboardCmd 15.97
89 TestFunctional/parallel/DryRun 1.15
90 TestFunctional/parallel/InternationalLanguage 0.57
91 TestFunctional/parallel/StatusCmd 0.81
96 TestFunctional/parallel/AddonsCmd 0.24
97 TestFunctional/parallel/PersistentVolumeClaim 25.54
99 TestFunctional/parallel/SSHCmd 0.53
100 TestFunctional/parallel/CpCmd 1.65
101 TestFunctional/parallel/MySQL 27.4
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.88
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.26
111 TestFunctional/parallel/License 0.64
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 0.48
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.3
119 TestFunctional/parallel/ImageCommands/Setup 1.78
120 TestFunctional/parallel/DockerEnv/bash 1.1
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.14
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.57
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.95
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.49
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.17
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.22
142 TestFunctional/parallel/ServiceCmd/DeployApp 8.25
143 TestFunctional/parallel/ServiceCmd/List 0.88
144 TestFunctional/parallel/ServiceCmd/JSONOutput 0.87
145 TestFunctional/parallel/ServiceCmd/HTTPS 15
146 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
147 TestFunctional/parallel/ProfileCmd/profile_list 0.37
148 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
149 TestFunctional/parallel/MountCmd/any-port 8.47
150 TestFunctional/parallel/MountCmd/specific-port 1.56
151 TestFunctional/parallel/ServiceCmd/Format 15
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.2
153 TestFunctional/parallel/ServiceCmd/URL 15
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 93.83
161 TestMultiControlPlane/serial/DeployApp 46.92
162 TestMultiControlPlane/serial/PingHostFromPods 1.39
163 TestMultiControlPlane/serial/AddWorkerNode 21.53
164 TestMultiControlPlane/serial/NodeLabels 0.06
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.69
166 TestMultiControlPlane/serial/CopyFile 16.38
167 TestMultiControlPlane/serial/StopSecondaryNode 11.42
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.5
169 TestMultiControlPlane/serial/RestartSecondaryNode 60.14
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.68
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 172.97
172 TestMultiControlPlane/serial/DeleteSecondaryNode 9.43
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.49
174 TestMultiControlPlane/serial/StopCluster 32.51
175 TestMultiControlPlane/serial/RestartCluster 81.76
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.49
177 TestMultiControlPlane/serial/AddSecondaryNode 36.18
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.67
181 TestImageBuild/serial/Setup 20.76
182 TestImageBuild/serial/NormalBuild 1.89
183 TestImageBuild/serial/BuildWithBuildArg 0.81
184 TestImageBuild/serial/BuildWithDockerIgnore 0.61
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.6
189 TestJSONOutput/start/Command 59.73
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.46
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.47
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.61
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.58
214 TestKicCustomNetwork/create_custom_network 22.74
215 TestKicCustomNetwork/use_default_bridge_network 22.66
216 TestKicExistingNetwork 22.64
217 TestKicCustomSubnet 22.75
218 TestKicStaticIP 23.24
219 TestMainNoArgs 0.08
220 TestMinikubeProfile 46.72
TestDownloadOnly/v1.20.0/json-events (14.06s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-488000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-488000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker : (14.056212275s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.06s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-488000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-488000: exit status 85 (302.011108ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-488000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |          |
	|         | -p download-only-488000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:50:35
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:50:35.202367    1996 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:35.202643    1996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:35.202648    1996 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:35.202652    1996 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:35.202816    1996 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	W0828 09:50:35.202909    1996 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19529-1451/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19529-1451/.minikube/config/config.json: no such file or directory
	I0828 09:50:35.204695    1996 out.go:352] Setting JSON to true
	I0828 09:50:35.231407    1996 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1205,"bootTime":1724862630,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0828 09:50:35.231602    1996 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:35.253211    1996 out.go:97] [download-only-488000] minikube v1.33.1 on Darwin 14.6.1
	I0828 09:50:35.253372    1996 notify.go:220] Checking for updates...
	W0828 09:50:35.253395    1996 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball: no such file or directory
	I0828 09:50:35.275194    1996 out.go:169] MINIKUBE_LOCATION=19529
	I0828 09:50:35.296095    1996 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 09:50:35.317065    1996 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0828 09:50:35.340125    1996 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:35.361298    1996 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	W0828 09:50:35.404119    1996 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 09:50:35.404635    1996 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:35.429399    1996 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0828 09:50:35.429555    1996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:50:35.514301    1996 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:50:35.505744087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:50:35.536061    1996 out.go:97] Using the docker driver based on user configuration
	I0828 09:50:35.536084    1996 start.go:297] selected driver: docker
	I0828 09:50:35.536094    1996 start.go:901] validating driver "docker" against <nil>
	I0828 09:50:35.536233    1996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:50:35.622174    1996 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:50:35.614439258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:50:35.622366    1996 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:35.626820    1996 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0828 09:50:35.627425    1996 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 09:50:35.650894    1996 out.go:169] Using Docker Desktop driver with root privileges
	I0828 09:50:35.672347    1996 cni.go:84] Creating CNI manager for ""
	I0828 09:50:35.672391    1996 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0828 09:50:35.672505    1996 start.go:340] cluster config:
	{Name:download-only-488000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-488000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:35.694005    1996 out.go:97] Starting "download-only-488000" primary control-plane node in "download-only-488000" cluster
	I0828 09:50:35.694066    1996 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 09:50:35.714848    1996 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 09:50:35.714872    1996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:35.714912    1996 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 09:50:35.732839    1996 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 09:50:35.733420    1996 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 09:50:35.733559    1996 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 09:50:35.788612    1996 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0828 09:50:35.788634    1996 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:35.788822    1996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:35.810342    1996 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0828 09:50:35.810377    1996 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:36.088092    1996 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0828 09:50:42.906970    1996 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:42.907210    1996 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:43.495680    1996 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0828 09:50:43.495904    1996 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/download-only-488000/config.json ...
	I0828 09:50:43.495929    1996 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/download-only-488000/config.json: {Name:mkf45793c0e1e685168a5005bb8b23e42901c358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:43.496603    1996 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0828 09:50:43.496915    1996 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-488000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-488000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)

TestDownloadOnly/v1.20.0/DeleteAll (0.6s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.60s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-488000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0/json-events (6.06s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-032000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-032000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=docker : (6.059374889s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.06s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.3s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-032000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-032000: exit status 85 (301.00531ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-488000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-488000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| delete  | -p download-only-488000        | download-only-488000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT | 28 Aug 24 09:50 PDT |
	| start   | -o=json --download-only        | download-only-032000 | jenkins | v1.33.1 | 28 Aug 24 09:50 PDT |                     |
	|         | -p download-only-032000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/28 09:50:50
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0828 09:50:50.374995    2054 out.go:345] Setting OutFile to fd 1 ...
	I0828 09:50:50.375725    2054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:50.375735    2054 out.go:358] Setting ErrFile to fd 2...
	I0828 09:50:50.375742    2054 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 09:50:50.376283    2054 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 09:50:50.377864    2054 out.go:352] Setting JSON to true
	I0828 09:50:50.401154    2054 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":1220,"bootTime":1724862630,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0828 09:50:50.401274    2054 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 09:50:50.423101    2054 out.go:97] [download-only-032000] minikube v1.33.1 on Darwin 14.6.1
	I0828 09:50:50.423328    2054 notify.go:220] Checking for updates...
	I0828 09:50:50.445100    2054 out.go:169] MINIKUBE_LOCATION=19529
	I0828 09:50:50.466310    2054 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 09:50:50.486968    2054 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0828 09:50:50.508187    2054 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 09:50:50.529292    2054 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	W0828 09:50:50.571221    2054 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0828 09:50:50.571816    2054 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 09:50:50.598259    2054 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0828 09:50:50.598393    2054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:50:50.682945    2054 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:50:50.67399973 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:50:50.704317    2054 out.go:97] Using the docker driver based on user configuration
	I0828 09:50:50.704364    2054 start.go:297] selected driver: docker
	I0828 09:50:50.704380    2054 start.go:901] validating driver "docker" against <nil>
	I0828 09:50:50.704580    2054 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 09:50:50.786772    2054 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:66 SystemTime:2024-08-28 16:50:50.778452482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 09:50:50.786970    2054 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0828 09:50:50.789983    2054 start_flags.go:393] Using suggested 8100MB memory alloc based on sys=32768MB, container=15991MB
	I0828 09:50:50.790135    2054 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0828 09:50:50.811037    2054 out.go:169] Using Docker Desktop driver with root privileges
	I0828 09:50:50.832289    2054 cni.go:84] Creating CNI manager for ""
	I0828 09:50:50.832332    2054 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0828 09:50:50.832358    2054 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0828 09:50:50.832538    2054 start.go:340] cluster config:
	{Name:download-only-032000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:8100 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-032000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 09:50:50.854102    2054 out.go:97] Starting "download-only-032000" primary control-plane node in "download-only-032000" cluster
	I0828 09:50:50.854144    2054 cache.go:121] Beginning downloading kic base image for docker with docker
	I0828 09:50:50.875015    2054 out.go:97] Pulling base image v0.0.44-1724775115-19521 ...
	I0828 09:50:50.875074    2054 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:50.875160    2054 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local docker daemon
	I0828 09:50:50.893221    2054 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce to local cache
	I0828 09:50:50.893407    2054 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory
	I0828 09:50:50.893435    2054 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce in local cache directory, skipping pull
	I0828 09:50:50.893449    2054 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce exists in cache, skipping pull
	I0828 09:50:50.893464    2054 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce as a tarball
	I0828 09:50:50.934709    2054 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0828 09:50:50.934737    2054 cache.go:56] Caching tarball of preloaded images
	I0828 09:50:50.935065    2054 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:50.956209    2054 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0828 09:50:50.956272    2054 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:51.192349    2054 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4?checksum=md5:2dd98f97b896d7a4f012ee403b477cc8 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0828 09:50:54.682714    2054 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:54.682898    2054 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 ...
	I0828 09:50:55.154271    2054 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0828 09:50:55.154508    2054 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/download-only-032000/config.json ...
	I0828 09:50:55.154531    2054 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/download-only-032000/config.json: {Name:mk1191a551b1571522db7892594dc667215fffde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0828 09:50:55.154830    2054 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0828 09:50:55.155045    2054 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19529-1451/.minikube/cache/darwin/amd64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-032000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-032000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.30s)

TestDownloadOnly/v1.31.0/DeleteAll (0.34s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.34s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-032000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnlyKic (1.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-183000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-183000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-183000
--- PASS: TestDownloadOnlyKic (1.56s)

TestBinaryMirror (1.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-590000 --alsologtostderr --binary-mirror http://127.0.0.1:49339 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-590000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-590000
--- PASS: TestBinaryMirror (1.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-376000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-376000: exit status 85 (210.662773ms)

-- stdout --
	* Profile "addons-376000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-376000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-376000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-376000: exit status 85 (189.730013ms)

-- stdout --
	* Profile "addons-376000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-376000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (224.44s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-376000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-376000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m44.436557027s)
--- PASS: TestAddons/Setup (224.44s)

TestAddons/serial/Volcano (42.89s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 16.870039ms
addons_test.go:905: volcano-admission stabilized in 16.916417ms
addons_test.go:913: volcano-controller stabilized in 17.041971ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-559zl" [b7813e69-4a09-4123-bdbf-6734418d7715] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.005373773s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-ztg5t" [b5d1e3e2-ac47-4727-9bf5-27f964b7d07e] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003721324s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-klxn8" [a73611c3-cf69-4c5d-9891-cfdadcf64c27] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00322882s
addons_test.go:932: (dbg) Run:  kubectl --context addons-376000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-376000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-376000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [500af48b-f20d-4b77-9d6f-9f35b3a7dcc9] Pending
helpers_test.go:344: "test-job-nginx-0" [500af48b-f20d-4b77-9d6f-9f35b3a7dcc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [500af48b-f20d-4b77-9d6f-9f35b3a7dcc9] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.004628453s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 addons disable volcano --alsologtostderr -v=1: (10.562248637s)
--- PASS: TestAddons/serial/Volcano (42.89s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-376000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-376000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/parallel/InspektorGadget (10.64s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g57hh" [492bce28-3ac3-422a-9b8d-6fc38b28ec30] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004576292s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-376000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-376000: (5.630975909s)
--- PASS: TestAddons/parallel/InspektorGadget (10.64s)

TestAddons/parallel/MetricsServer (5.64s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.255604ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-nvqvh" [2fae3987-7824-4b7c-8daf-259b4f25deb3] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004407266s
addons_test.go:417: (dbg) Run:  kubectl --context addons-376000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.64s)

TestAddons/parallel/HelmTiller (10.25s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 2.993812ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-sh2dj" [aef15770-b038-42fc-8810-27971a2dac93] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005044s
addons_test.go:475: (dbg) Run:  kubectl --context addons-376000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-376000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.702647689s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.25s)

TestAddons/parallel/CSI (57.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.183378ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-376000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-376000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [62c80a29-dff2-4dc1-8757-2b25c837a3ce] Pending
helpers_test.go:344: "task-pv-pod" [62c80a29-dff2-4dc1-8757-2b25c837a3ce] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [62c80a29-dff2-4dc1-8757-2b25c837a3ce] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.005645585s
addons_test.go:590: (dbg) Run:  kubectl --context addons-376000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-376000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-376000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-376000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-376000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [99527e93-f0bb-4a85-af69-1602eea0c185] Pending
helpers_test.go:344: "task-pv-pod-restore" [99527e93-f0bb-4a85-af69-1602eea0c185] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [99527e93-f0bb-4a85-af69-1602eea0c185] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006894078s
addons_test.go:632: (dbg) Run:  kubectl --context addons-376000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-376000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-376000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.64674922s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (57.95s)

TestAddons/parallel/Headlamp (18.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-376000 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-376000 --alsologtostderr -v=1: (1.039923857s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-7mf5g" [8b16fea7-2a35-4d5c-93f8-ef9ca0c4ccef] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-7mf5g" [8b16fea7-2a35-4d5c-93f8-ef9ca0c4ccef] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.007198829s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 addons disable headlamp --alsologtostderr -v=1: (5.596367678s)
--- PASS: TestAddons/parallel/Headlamp (18.64s)

TestAddons/parallel/CloudSpanner (5.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-zbg4x" [ed59dc67-0bfd-4d20-a151-0ac3904cebda] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004867435s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-376000
--- PASS: TestAddons/parallel/CloudSpanner (5.52s)

TestAddons/parallel/LocalPath (55.37s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-376000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-376000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [33f65bd6-240b-4ea5-97fb-fe22e30fc593] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [33f65bd6-240b-4ea5-97fb-fe22e30fc593] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [33f65bd6-240b-4ea5-97fb-fe22e30fc593] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003430466s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-376000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 ssh "cat /opt/local-path-provisioner/pvc-791c6585-c16e-4d5e-a4a4-3f2a7bc0c09f_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-376000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-376000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.472370121s)
--- PASS: TestAddons/parallel/LocalPath (55.37s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4pp2v" [0547e8e7-d64c-4673-b30d-2c858c7224a3] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.006029985s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-376000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (10.6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j7lpb" [84831e1d-7fe1-4ba9-97e4-5e2330be6308] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004679665s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-376000 addons disable yakd --alsologtostderr -v=1: (5.596500273s)
--- PASS: TestAddons/parallel/Yakd (10.60s)

TestAddons/StoppedEnableDisable (11.38s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-376000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-376000: (10.829171876s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-376000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-376000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-376000
--- PASS: TestAddons/StoppedEnableDisable (11.38s)

TestErrorSpam/setup (21.32s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-582000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-582000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 --driver=docker : (21.315804832s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (21.32s)

TestErrorSpam/start (2s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 start --dry-run
--- PASS: TestErrorSpam/start (2.00s)

TestErrorSpam/status (0.83s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.58s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 pause
--- PASS: TestErrorSpam/pause (1.58s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (2.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 stop: (1.93028848s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-582000 stop
--- PASS: TestErrorSpam/stop (2.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19529-1451/.minikube/files/etc/test/nested/copy/1994/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (32.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-amd64 start -p functional-425000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (32.819521103s)
--- PASS: TestFunctional/serial/StartWithProxy (32.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (27.44s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-amd64 start -p functional-425000 --alsologtostderr -v=8: (27.435088886s)
functional_test.go:663: soft start took 27.435744365s for "functional-425000" cluster.
--- PASS: TestFunctional/serial/SoftStart (27.44s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-425000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:3.1: (2.011517783s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:3.3: (1.998354656s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 cache add registry.k8s.io/pause:latest: (1.578688926s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.59s)

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local212991528/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache add minikube-local-cache-test:functional-425000
functional_test.go:1089: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 cache add minikube-local-cache-test:functional-425000: (1.027932183s)
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache delete minikube-local-cache-test:functional-425000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-425000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (257.210277ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 cache reload: (1.177655157s)
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.98s)

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (1.21s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 kubectl -- --context functional-425000 get pods
functional_test.go:716: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 kubectl -- --context functional-425000 get pods: (1.206267103s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.21s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-425000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-425000 get pods: (1.569399201s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.57s)

TestFunctional/serial/ExtraConfig (41.01s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-amd64 start -p functional-425000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.010427967s)
functional_test.go:761: restart took 41.010550625s for "functional-425000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.01s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-425000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.15s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 logs
functional_test.go:1236: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 logs: (3.148122163s)
--- PASS: TestFunctional/serial/LogsCmd (3.15s)

TestFunctional/serial/LogsFileCmd (2.86s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2938837751/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2938837751/001/logs.txt: (2.86368562s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.86s)

TestFunctional/serial/InvalidService (4.2s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-425000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-425000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-425000: exit status 115 (377.112966ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31720 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-425000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.20s)

TestFunctional/parallel/ConfigCmd (0.65s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 config get cpus: exit status 14 (56.395851ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 config get cpus: exit status 14 (78.864103ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.65s)

TestFunctional/parallel/DashboardCmd (15.97s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-425000 --alsologtostderr -v=1]
E0828 10:09:55.551126    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:910: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-425000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.97s)

TestFunctional/parallel/DryRun (1.15s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-425000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (549.77837ms)

-- stdout --
	* [functional-425000] minikube v1.33.1 on Darwin 14.6.1
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0828 10:09:50.553406    4173 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:09:50.553581    4173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:50.553587    4173 out.go:358] Setting ErrFile to fd 2...
	I0828 10:09:50.553591    4173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:50.553761    4173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 10:09:50.555126    4173 out.go:352] Setting JSON to false
	I0828 10:09:50.577580    4173 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2360,"bootTime":1724862630,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0828 10:09:50.577779    4173 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:09:50.599824    4173 out.go:177] * [functional-425000] minikube v1.33.1 on Darwin 14.6.1
	I0828 10:09:50.642195    4173 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:09:50.642218    4173 notify.go:220] Checking for updates...
	I0828 10:09:50.683965    4173 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 10:09:50.705158    4173 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0828 10:09:50.726309    4173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:09:50.747204    4173 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	I0828 10:09:50.768446    4173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:09:50.789716    4173 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:09:50.790281    4173 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:09:50.813810    4173 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0828 10:09:50.813995    4173 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 10:09:50.894748    4173 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2024-08-28 17:09:50.88601592 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 10:09:50.916426    4173 out.go:177] * Using the docker driver based on existing profile
	I0828 10:09:50.937604    4173 start.go:297] selected driver: docker
	I0828 10:09:50.937632    4173 start.go:901] validating driver "docker" against &{Name:functional-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-425000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:09:50.937764    4173 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:09:50.963611    4173 out.go:201] 
	W0828 10:09:50.984305    4173 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0828 10:09:51.005593    4173 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.15s)

TestFunctional/parallel/InternationalLanguage (0.57s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-425000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
E0828 10:09:50.429086    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-425000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (567.104589ms)

-- stdout --
	* [functional-425000] minikube v1.33.1 sur Darwin 14.6.1
	  - MINIKUBE_LOCATION=19529
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0828 10:09:49.981146    4157 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:09:49.981351    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:49.981356    4157 out.go:358] Setting ErrFile to fd 2...
	I0828 10:09:49.981360    4157 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:09:49.981578    4157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 10:09:49.983198    4157 out.go:352] Setting JSON to false
	I0828 10:09:50.006584    4157 start.go:129] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":2359,"bootTime":1724862630,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.6.1","kernelVersion":"23.6.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0828 10:09:50.006705    4157 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0828 10:09:50.029220    4157 out.go:177] * [functional-425000] minikube v1.33.1 sur Darwin 14.6.1
	I0828 10:09:50.070999    4157 out.go:177]   - MINIKUBE_LOCATION=19529
	I0828 10:09:50.071035    4157 notify.go:220] Checking for updates...
	I0828 10:09:50.113787    4157 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig
	I0828 10:09:50.134968    4157 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0828 10:09:50.155792    4157 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0828 10:09:50.176995    4157 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube
	I0828 10:09:50.198093    4157 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0828 10:09:50.219063    4157 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:09:50.219453    4157 driver.go:392] Setting default libvirt URI to qemu:///system
	I0828 10:09:50.243115    4157 docker.go:123] docker version: linux-27.0.3:Docker Desktop 4.32.0 (157355)
	I0828 10:09:50.243283    4157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0828 10:09:50.327938    4157 info.go:266] docker info: {ID:411b0150-1087-4b28-afd8-60215a002391 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2024-08-28 17:09:50.319509876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:6.6.32-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:16768061440 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/jenkins/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1-desktop.1] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1-desktop.1] map[Name:debug Path:/Users/jenkins/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-debug] ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.32] map[Name:desktop Path:/Users/jenkins/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-desktop] ShortDescription:Docker Desktop commands (Alpha) Vendor:Docker Inc. Version:v0.0.14] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.25] map[Name:feedback Path:/Users/jenkins/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-feedback] ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.3.0] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.10.0]] Warnings:<nil>}}
	I0828 10:09:50.387427    4157 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0828 10:09:50.408462    4157 start.go:297] selected driver: docker
	I0828 10:09:50.408496    4157 start.go:901] validating driver "docker" against &{Name:functional-425000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-425000 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: M
ountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0828 10:09:50.408605    4157 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0828 10:09:50.434407    4157 out.go:201] 
	W0828 10:09:50.455547    4157 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0828 10:09:50.476350    4157 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.57s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/AddonsCmd (0.24s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

TestFunctional/parallel/PersistentVolumeClaim (25.54s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f2c349a4-c6ab-4a0c-b05a-1447eb3d79b6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004784024s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-425000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-425000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-425000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-425000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a0c9631-a2d3-4525-8a2f-0b145f541cd8] Pending
helpers_test.go:344: "sp-pod" [8a0c9631-a2d3-4525-8a2f-0b145f541cd8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8a0c9631-a2d3-4525-8a2f-0b145f541cd8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004205505s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-425000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-425000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-425000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [86bc56aa-ad2d-48e0-b714-ec89c0d9d498] Pending
helpers_test.go:344: "sp-pod" [86bc56aa-ad2d-48e0-b714-ec89c0d9d498] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [86bc56aa-ad2d-48e0-b714-ec89c0d9d498] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0050086s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-425000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.54s)

TestFunctional/parallel/SSHCmd (0.53s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.53s)

TestFunctional/parallel/CpCmd (1.65s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh -n functional-425000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cp functional-425000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd3606454359/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh -n functional-425000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh -n functional-425000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)

TestFunctional/parallel/MySQL (27.4s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-425000 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-5v66r" [34f0e5f0-366b-4d3f-aab5-7abad74d9caa] Pending
helpers_test.go:344: "mysql-6cdb49bbb-5v66r" [34f0e5f0-366b-4d3f-aab5-7abad74d9caa] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-5v66r" [34f0e5f0-366b-4d3f-aab5-7abad74d9caa] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003988646s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;": exit status 1 (135.526767ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;": exit status 1 (140.685181ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;": exit status 1 (117.937492ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-425000 exec mysql-6cdb49bbb-5v66r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.40s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1994/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /etc/test/nested/copy/1994/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.88s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1994.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /etc/ssl/certs/1994.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1994.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /usr/share/ca-certificates/1994.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/19942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /etc/ssl/certs/19942.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/19942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /usr/share/ca-certificates/19942.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.88s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-425000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh "sudo systemctl is-active crio": exit status 1 (256.53655ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.26s)

TestFunctional/parallel/License (0.64s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.64s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.48s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.48s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-425000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-425000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-425000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-425000 image ls --format short --alsologtostderr:
I0828 10:10:08.886522    4255 out.go:345] Setting OutFile to fd 1 ...
I0828 10:10:08.886839    4255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:08.886845    4255 out.go:358] Setting ErrFile to fd 2...
I0828 10:10:08.886848    4255 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:08.887063    4255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
I0828 10:10:08.887656    4255 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:08.887787    4255 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:08.888162    4255 cli_runner.go:164] Run: docker container inspect functional-425000 --format={{.State.Status}}
I0828 10:10:08.908782    4255 ssh_runner.go:195] Run: systemctl --version
I0828 10:10:08.908855    4255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425000
I0828 10:10:08.927353    4255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50095 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/functional-425000/id_rsa Username:docker}
I0828 10:10:09.018077    4255 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-425000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-425000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| docker.io/library/nginx                     | latest            | 5ef79149e0ec8 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/minikube-local-cache-test | functional-425000 | b29474c5eedb1 | 30B    |
| docker.io/library/nginx                     | alpine            | 0f0eda053dc5c | 43.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| localhost/my-image                          | functional-425000 | da3c574b7987c | 1.24MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-425000 image ls --format table --alsologtostderr:
I0828 10:10:12.926015    4282 out.go:345] Setting OutFile to fd 1 ...
I0828 10:10:12.926291    4282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:12.926297    4282 out.go:358] Setting ErrFile to fd 2...
I0828 10:10:12.926300    4282 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:12.926460    4282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
I0828 10:10:12.927081    4282 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:12.927171    4282 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:12.927554    4282 cli_runner.go:164] Run: docker container inspect functional-425000 --format={{.State.Status}}
I0828 10:10:12.945870    4282 ssh_runner.go:195] Run: systemctl --version
I0828 10:10:12.945940    4282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425000
I0828 10:10:12.964936    4282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50095 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/functional-425000/id_rsa Username:docker}
I0828 10:10:13.054610    4282 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-425000 image ls --format json --alsologtostderr:
[{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"b29474c5eedb186e51c80e6bdc665d65252441579ebf6ac7f91e750986faa3c9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-425000"],"size":"30"},{"id":"0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43300000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00
797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.
io/pause:3.10"],"size":"736000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-425000"],"size":"4940000"},{"id":"da3c574b7987c5d69bbe5545777d107311dd53e7193aaa973935aa7ec955d51b","repoDigests":[],"repoTags":["localhost/my-image:functional-425000"],"size":"1240000"},{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"s
ize":"88400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-425000 image ls --format json --alsologtostderr:
I0828 10:10:12.693888    4278 out.go:345] Setting OutFile to fd 1 ...
I0828 10:10:12.694083    4278 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:12.694088    4278 out.go:358] Setting ErrFile to fd 2...
I0828 10:10:12.694092    4278 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:12.694280    4278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
I0828 10:10:12.694860    4278 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:12.694952    4278 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:12.695339    4278 cli_runner.go:164] Run: docker container inspect functional-425000 --format={{.State.Status}}
I0828 10:10:12.714658    4278 ssh_runner.go:195] Run: systemctl --version
I0828 10:10:12.714731    4278 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425000
I0828 10:10:12.733161    4278 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50095 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/functional-425000/id_rsa Username:docker}
I0828 10:10:12.822656    4278 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-425000 image ls --format yaml --alsologtostderr:
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: b29474c5eedb186e51c80e6bdc665d65252441579ebf6ac7f91e750986faa3c9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-425000
size: "30"
- id: 5ef79149e0ec84a7a9f9284c3f91aa3c20608f8391f5445eabe92ef07dbda03c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-425000
size: "4940000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 0f0eda053dc5c4c8240f11542cb4d200db6a11d476a4189b1eb0a3afa5684a9a
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43300000"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-425000 image ls --format yaml --alsologtostderr:
I0828 10:10:09.121489    4260 out.go:345] Setting OutFile to fd 1 ...
I0828 10:10:09.121772    4260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:09.121777    4260 out.go:358] Setting ErrFile to fd 2...
I0828 10:10:09.121781    4260 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:09.121947    4260 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
I0828 10:10:09.122538    4260 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:09.122634    4260 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:09.123000    4260 cli_runner.go:164] Run: docker container inspect functional-425000 --format={{.State.Status}}
I0828 10:10:09.141777    4260 ssh_runner.go:195] Run: systemctl --version
I0828 10:10:09.141852    4260 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425000
I0828 10:10:09.160940    4260 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50095 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/functional-425000/id_rsa Username:docker}
I0828 10:10:09.251501    4260 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh pgrep buildkitd: exit status 1 (228.611844ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image build -t localhost/my-image:functional-425000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-amd64 -p functional-425000 image build -t localhost/my-image:functional-425000 testdata/build --alsologtostderr: (2.844313053s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-425000 image build -t localhost/my-image:functional-425000 testdata/build --alsologtostderr:
I0828 10:10:09.619677    4270 out.go:345] Setting OutFile to fd 1 ...
I0828 10:10:09.619963    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:09.619969    4270 out.go:358] Setting ErrFile to fd 2...
I0828 10:10:09.619973    4270 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0828 10:10:09.620140    4270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
I0828 10:10:09.620762    4270 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:09.622001    4270 config.go:182] Loaded profile config "functional-425000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0828 10:10:09.622416    4270 cli_runner.go:164] Run: docker container inspect functional-425000 --format={{.State.Status}}
I0828 10:10:09.641345    4270 ssh_runner.go:195] Run: systemctl --version
I0828 10:10:09.641414    4270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-425000
I0828 10:10:09.659659    4270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50095 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/functional-425000/id_rsa Username:docker}
I0828 10:10:09.749815    4270 build_images.go:161] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2915741970.tar
I0828 10:10:09.749911    4270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0828 10:10:09.758253    4270 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2915741970.tar
I0828 10:10:09.762105    4270 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2915741970.tar: stat -c "%s %y" /var/lib/minikube/build/build.2915741970.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2915741970.tar': No such file or directory
I0828 10:10:09.762135    4270 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2915741970.tar --> /var/lib/minikube/build/build.2915741970.tar (3072 bytes)
I0828 10:10:09.782901    4270 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2915741970
I0828 10:10:09.791144    4270 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2915741970 -xf /var/lib/minikube/build/build.2915741970.tar
I0828 10:10:09.799879    4270 docker.go:360] Building image: /var/lib/minikube/build/build.2915741970
I0828 10:10:09.799950    4270 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-425000 /var/lib/minikube/build/build.2915741970
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:da3c574b7987c5d69bbe5545777d107311dd53e7193aaa973935aa7ec955d51b done
#8 naming to localhost/my-image:functional-425000 done
#8 DONE 0.0s
I0828 10:10:12.362323    4270 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-425000 /var/lib/minikube/build/build.2915741970: (2.562432488s)
I0828 10:10:12.362379    4270 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2915741970
I0828 10:10:12.370803    4270 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2915741970.tar
I0828 10:10:12.379483    4270 build_images.go:217] Built localhost/my-image:functional-425000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.2915741970.tar
I0828 10:10:12.379538    4270 build_images.go:133] succeeded building to: functional-425000
I0828 10:10:12.379544    4270 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.30s)

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.741510439s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-425000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/DockerEnv/bash (1.1s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-425000 docker-env) && out/minikube-darwin-amd64 status -p functional-425000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-425000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.10s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image load --daemon kicbase/echo-server:functional-425000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.14s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image load --daemon kicbase/echo-server:functional-425000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-425000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image load --daemon kicbase/echo-server:functional-425000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.57s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image save kicbase/echo-server:functional-425000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image rm kicbase/echo-server:functional-425000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.95s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-425000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 image save --daemon kicbase/echo-server:functional-425000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-425000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.49s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3842: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-425000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [591d962d-795d-4560-b2fc-b76e984bb4f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [591d962d-795d-4560-b2fc-b76e984bb4f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.00496715s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.17s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-425000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-425000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3860: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.22s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-425000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-425000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-kgnjv" [a2317de1-5210-450d-90a8-57a0dba750a4] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-kgnjv" [a2317de1-5210-450d-90a8-57a0dba750a4] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004999775s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.25s)

TestFunctional/parallel/ServiceCmd/List (0.88s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.88s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 service list -o json
functional_test.go:1494: Took "874.779796ms" to run "out/minikube-darwin-amd64 -p functional-425000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.87s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 service --namespace=default --https --url hello-node: signal: killed (15.002990902s)

-- stdout --
	https://127.0.0.1:50406

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1522: found endpoint: https://127.0.0.1:50406
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1315: Took "288.954619ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1329: Took "79.604608ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1366: Took "289.933393ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1379: Took "78.744341ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (8.47s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port394843256/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724864976896574000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port394843256/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724864976896574000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port394843256/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724864976896574000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port394843256/001/test-1724864976896574000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (252.927491ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 28 17:09 test-1724864976896574000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh cat /mount-9p/test-1724864976896574000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-425000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [85db5d06-8728-431d-8dd7-bbf611d60638] Pending
helpers_test.go:344: "busybox-mount" [85db5d06-8728-431d-8dd7-bbf611d60638] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [85db5d06-8728-431d-8dd7-bbf611d60638] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [85db5d06-8728-431d-8dd7-bbf611d60638] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.006359949s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-425000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port394843256/001:/mount-9p --alsologtostderr -v=1] ...
E0828 10:09:45.271301    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:45.300982    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:45.314202    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:45.337280    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.47s)

TestFunctional/parallel/MountCmd/specific-port (1.56s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2811415270/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p"
E0828 10:09:45.378948    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:09:45.461169    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (251.018258ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0828 10:09:45.622396    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T /mount-9p | grep 9p"
E0828 10:09:45.943743    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2811415270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "sudo umount -f /mount-9p"
E0828 10:09:46.585196    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh "sudo umount -f /mount-9p": exit status 1 (226.199362ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-425000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port2811415270/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.56s)

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 service hello-node --url --format={{.IP}}: signal: killed (15.003042398s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T" /mount1: exit status 1 (340.112884ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0828 10:09:47.867449    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-425000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-425000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3983259432/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.20s)

TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-amd64 -p functional-425000 service hello-node --url
E0828 10:10:05.792473    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
2024/08/28 10:10:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-425000 service hello-node --url: signal: killed (15.002449182s)

-- stdout --
	http://127.0.0.1:50533

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1565: found endpoint for hello-node: http://127.0.0.1:50533
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-425000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-425000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-425000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (93.83s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-178000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker 
E0828 10:10:26.273883    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:11:07.234565    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-178000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker : (1m33.116257424s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (93.83s)

TestMultiControlPlane/serial/DeployApp (46.92s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-178000 -- rollout status deployment/busybox: (6.170810949s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4 10.244.1.2'\n\n-- /stdout --"
E0828 10:12:29.153755    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-6h6g5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-btzcl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-dlsdr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-6h6g5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-btzcl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-dlsdr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-6h6g5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-btzcl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-dlsdr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (46.92s)

TestMultiControlPlane/serial/PingHostFromPods (1.39s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-6h6g5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-6h6g5 -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-btzcl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-btzcl -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-dlsdr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-178000 -- exec busybox-7dff88458-dlsdr -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.39s)

TestMultiControlPlane/serial/AddWorkerNode (21.53s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-178000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-178000 -v=7 --alsologtostderr: (20.67972103s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.53s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-178000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.69s)

TestMultiControlPlane/serial/CopyFile (16.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp testdata/cp-test.txt ha-178000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile947584134/001/cp-test_ha-178000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000:/home/docker/cp-test.txt ha-178000-m02:/home/docker/cp-test_ha-178000_ha-178000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test_ha-178000_ha-178000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000:/home/docker/cp-test.txt ha-178000-m03:/home/docker/cp-test_ha-178000_ha-178000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test_ha-178000_ha-178000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000:/home/docker/cp-test.txt ha-178000-m04:/home/docker/cp-test_ha-178000_ha-178000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test_ha-178000_ha-178000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp testdata/cp-test.txt ha-178000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile947584134/001/cp-test_ha-178000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m02:/home/docker/cp-test.txt ha-178000:/home/docker/cp-test_ha-178000-m02_ha-178000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test_ha-178000-m02_ha-178000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m02:/home/docker/cp-test.txt ha-178000-m03:/home/docker/cp-test_ha-178000-m02_ha-178000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test_ha-178000-m02_ha-178000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m02:/home/docker/cp-test.txt ha-178000-m04:/home/docker/cp-test_ha-178000-m02_ha-178000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test_ha-178000-m02_ha-178000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp testdata/cp-test.txt ha-178000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile947584134/001/cp-test_ha-178000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m03:/home/docker/cp-test.txt ha-178000:/home/docker/cp-test_ha-178000-m03_ha-178000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test_ha-178000-m03_ha-178000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m03:/home/docker/cp-test.txt ha-178000-m02:/home/docker/cp-test_ha-178000-m03_ha-178000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test_ha-178000-m03_ha-178000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m03:/home/docker/cp-test.txt ha-178000-m04:/home/docker/cp-test_ha-178000-m03_ha-178000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test_ha-178000-m03_ha-178000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp testdata/cp-test.txt ha-178000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m04:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiControlPlaneserialCopyFile947584134/001/cp-test_ha-178000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m04:/home/docker/cp-test.txt ha-178000:/home/docker/cp-test_ha-178000-m04_ha-178000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000 "sudo cat /home/docker/cp-test_ha-178000-m04_ha-178000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m04:/home/docker/cp-test.txt ha-178000-m02:/home/docker/cp-test_ha-178000-m04_ha-178000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m02 "sudo cat /home/docker/cp-test_ha-178000-m04_ha-178000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 cp ha-178000-m04:/home/docker/cp-test.txt ha-178000-m03:/home/docker/cp-test_ha-178000-m04_ha-178000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 ssh -n ha-178000-m03 "sudo cat /home/docker/cp-test_ha-178000-m04_ha-178000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.38s)
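Every step in the sequence above repeats one pattern: copy a file onto a node, then read it back over `ssh` and compare. A minimal local sketch of that round-trip check, using plain `cp` and `cat` as stand-ins for `minikube cp` and `minikube ssh` (the file contents and temp paths here are hypothetical, not taken from the test):

```shell
# Round-trip check: write a file, "copy it to the node", read it back, compare.
# Plain cp/cat stand in for `minikube -p ha-178000 cp ...` and
# `minikube -p ha-178000 ssh -n <node> "sudo cat ..."`.
src=$(mktemp) && dst=$(mktemp)
echo "hello from cp-test" > "$src"
cp "$src" "$dst"                # stand-in for: minikube cp
readback=$(cat "$dst")          # stand-in for: minikube ssh ... "sudo cat ..."
if [ "$readback" = "hello from cp-test" ]; then
  echo "round-trip ok"
fi
rm -f "$src" "$dst"
```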

TestMultiControlPlane/serial/StopSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-178000 node stop m02 -v=7 --alsologtostderr: (10.773428063s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr: exit status 7 (645.1326ms)

-- stdout --
	ha-178000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-178000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-178000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-178000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0828 10:13:31.256552    5086 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:13:31.256753    5086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:13:31.256759    5086 out.go:358] Setting ErrFile to fd 2...
	I0828 10:13:31.256763    5086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:13:31.256930    5086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 10:13:31.257113    5086 out.go:352] Setting JSON to false
	I0828 10:13:31.257138    5086 mustload.go:65] Loading cluster: ha-178000
	I0828 10:13:31.257178    5086 notify.go:220] Checking for updates...
	I0828 10:13:31.257474    5086 config.go:182] Loaded profile config "ha-178000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:13:31.257496    5086 status.go:255] checking status of ha-178000 ...
	I0828 10:13:31.257946    5086 cli_runner.go:164] Run: docker container inspect ha-178000 --format={{.State.Status}}
	I0828 10:13:31.278159    5086 status.go:330] ha-178000 host status = "Running" (err=<nil>)
	I0828 10:13:31.278217    5086 host.go:66] Checking if "ha-178000" exists ...
	I0828 10:13:31.278518    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-178000
	I0828 10:13:31.297171    5086 host.go:66] Checking if "ha-178000" exists ...
	I0828 10:13:31.297458    5086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:13:31.297526    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-178000
	I0828 10:13:31.316036    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50556 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/ha-178000/id_rsa Username:docker}
	I0828 10:13:31.404681    5086 ssh_runner.go:195] Run: systemctl --version
	I0828 10:13:31.409335    5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:13:31.419584    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-178000
	I0828 10:13:31.438964    5086 kubeconfig.go:125] found "ha-178000" server: "https://127.0.0.1:50555"
	I0828 10:13:31.438995    5086 api_server.go:166] Checking apiserver status ...
	I0828 10:13:31.439034    5086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:13:31.449774    5086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup
	W0828 10:13:31.458914    5086 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2309/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:13:31.458975    5086 ssh_runner.go:195] Run: ls
	I0828 10:13:31.462846    5086 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50555/healthz ...
	I0828 10:13:31.466675    5086 api_server.go:279] https://127.0.0.1:50555/healthz returned 200:
	ok
	I0828 10:13:31.466690    5086 status.go:422] ha-178000 apiserver status = Running (err=<nil>)
	I0828 10:13:31.466701    5086 status.go:257] ha-178000 status: &{Name:ha-178000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:13:31.466713    5086 status.go:255] checking status of ha-178000-m02 ...
	I0828 10:13:31.466946    5086 cli_runner.go:164] Run: docker container inspect ha-178000-m02 --format={{.State.Status}}
	I0828 10:13:31.485374    5086 status.go:330] ha-178000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:13:31.485402    5086 status.go:343] host is not running, skipping remaining checks
	I0828 10:13:31.485411    5086 status.go:257] ha-178000-m02 status: &{Name:ha-178000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:13:31.485435    5086 status.go:255] checking status of ha-178000-m03 ...
	I0828 10:13:31.485746    5086 cli_runner.go:164] Run: docker container inspect ha-178000-m03 --format={{.State.Status}}
	I0828 10:13:31.504130    5086 status.go:330] ha-178000-m03 host status = "Running" (err=<nil>)
	I0828 10:13:31.504154    5086 host.go:66] Checking if "ha-178000-m03" exists ...
	I0828 10:13:31.504432    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-178000-m03
	I0828 10:13:31.522521    5086 host.go:66] Checking if "ha-178000-m03" exists ...
	I0828 10:13:31.522789    5086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:13:31.522836    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-178000-m03
	I0828 10:13:31.540876    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50663 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/ha-178000-m03/id_rsa Username:docker}
	I0828 10:13:31.630638    5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:13:31.640996    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-178000
	I0828 10:13:31.660054    5086 kubeconfig.go:125] found "ha-178000" server: "https://127.0.0.1:50555"
	I0828 10:13:31.660078    5086 api_server.go:166] Checking apiserver status ...
	I0828 10:13:31.660122    5086 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0828 10:13:31.670595    5086 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2220/cgroup
	W0828 10:13:31.679649    5086 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2220/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0828 10:13:31.679707    5086 ssh_runner.go:195] Run: ls
	I0828 10:13:31.683563    5086 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:50555/healthz ...
	I0828 10:13:31.687483    5086 api_server.go:279] https://127.0.0.1:50555/healthz returned 200:
	ok
	I0828 10:13:31.687495    5086 status.go:422] ha-178000-m03 apiserver status = Running (err=<nil>)
	I0828 10:13:31.687513    5086 status.go:257] ha-178000-m03 status: &{Name:ha-178000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:13:31.687524    5086 status.go:255] checking status of ha-178000-m04 ...
	I0828 10:13:31.687782    5086 cli_runner.go:164] Run: docker container inspect ha-178000-m04 --format={{.State.Status}}
	I0828 10:13:31.705462    5086 status.go:330] ha-178000-m04 host status = "Running" (err=<nil>)
	I0828 10:13:31.705499    5086 host.go:66] Checking if "ha-178000-m04" exists ...
	I0828 10:13:31.705779    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-178000-m04
	I0828 10:13:31.724203    5086 host.go:66] Checking if "ha-178000-m04" exists ...
	I0828 10:13:31.724455    5086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0828 10:13:31.724516    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-178000-m04
	I0828 10:13:31.742419    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50790 SSHKeyPath:/Users/jenkins/minikube-integration/19529-1451/.minikube/machines/ha-178000-m04/id_rsa Username:docker}
	I0828 10:13:31.832510    5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0828 10:13:31.843886    5086 status.go:257] ha-178000-m04 status: &{Name:ha-178000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.42s)
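The `-- stdout --` block above is plain `key: value` text, one block per node, so the degraded state is easy to check mechanically. A minimal sketch that counts nodes reporting `host: Stopped`; the sample below is an abbreviated, untabbed copy of that output, and in practice you would pipe `minikube -p ha-178000 status` straight into the same pipeline:

```shell
# Count how many nodes report "host: Stopped" in minikube-status-style output.
sample='ha-178000
type: Control Plane
host: Running
ha-178000-m02
type: Control Plane
host: Stopped
ha-178000-m03
host: Running
ha-178000-m04
host: Running'
stopped=$(printf '%s\n' "$sample" | grep -c '^host: Stopped')
echo "stopped hosts: $stopped"
```

For this run one stopped host is expected (`ha-178000-m02`), which is exactly why the surrounding `status` command exits 7.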

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.5s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.50s)

TestMultiControlPlane/serial/RestartSecondaryNode (60.14s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 node start m02 -v=7 --alsologtostderr
E0828 10:13:43.235628    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.242267    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.253516    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.275117    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.316698    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.398439    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.559882    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:43.881300    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:44.523298    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:45.804766    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:48.366041    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:13:53.488550    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:14:03.730813    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:14:24.212128    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-178000 node start m02 -v=7 --alsologtostderr: (59.202344419s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (60.14s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.97s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-178000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-178000 -v=7 --alsologtostderr
E0828 10:14:45.262478    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:15:05.174339    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-178000 -v=7 --alsologtostderr: (33.826819786s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-178000 --wait=true -v=7 --alsologtostderr
E0828 10:15:12.991649    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:16:27.093617    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-178000 --wait=true -v=7 --alsologtostderr: (2m19.014836298s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-178000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.97s)

TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-178000 node delete m03 -v=7 --alsologtostderr: (8.660602331s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.43s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.49s)

TestMultiControlPlane/serial/StopCluster (32.51s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-178000 stop -v=7 --alsologtostderr: (32.395004581s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr: exit status 7 (112.097582ms)

-- stdout --
	ha-178000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-178000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-178000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0828 10:18:08.510690    5484 out.go:345] Setting OutFile to fd 1 ...
	I0828 10:18:08.510882    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:18:08.510888    5484 out.go:358] Setting ErrFile to fd 2...
	I0828 10:18:08.510891    5484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0828 10:18:08.511074    5484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19529-1451/.minikube/bin
	I0828 10:18:08.511254    5484 out.go:352] Setting JSON to false
	I0828 10:18:08.511285    5484 mustload.go:65] Loading cluster: ha-178000
	I0828 10:18:08.511324    5484 notify.go:220] Checking for updates...
	I0828 10:18:08.511579    5484 config.go:182] Loaded profile config "ha-178000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0828 10:18:08.511599    5484 status.go:255] checking status of ha-178000 ...
	I0828 10:18:08.512000    5484 cli_runner.go:164] Run: docker container inspect ha-178000 --format={{.State.Status}}
	I0828 10:18:08.530277    5484 status.go:330] ha-178000 host status = "Stopped" (err=<nil>)
	I0828 10:18:08.530298    5484 status.go:343] host is not running, skipping remaining checks
	I0828 10:18:08.530305    5484 status.go:257] ha-178000 status: &{Name:ha-178000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:18:08.530326    5484 status.go:255] checking status of ha-178000-m02 ...
	I0828 10:18:08.530582    5484 cli_runner.go:164] Run: docker container inspect ha-178000-m02 --format={{.State.Status}}
	I0828 10:18:08.548542    5484 status.go:330] ha-178000-m02 host status = "Stopped" (err=<nil>)
	I0828 10:18:08.548568    5484 status.go:343] host is not running, skipping remaining checks
	I0828 10:18:08.548576    5484 status.go:257] ha-178000-m02 status: &{Name:ha-178000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0828 10:18:08.548587    5484 status.go:255] checking status of ha-178000-m04 ...
	I0828 10:18:08.548857    5484 cli_runner.go:164] Run: docker container inspect ha-178000-m04 --format={{.State.Status}}
	I0828 10:18:08.566580    5484 status.go:330] ha-178000-m04 host status = "Stopped" (err=<nil>)
	I0828 10:18:08.566602    5484 status.go:343] host is not running, skipping remaining checks
	I0828 10:18:08.566609    5484 status.go:257] ha-178000-m04 status: &{Name:ha-178000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.51s)

TestMultiControlPlane/serial/RestartCluster (81.76s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-178000 --wait=true -v=7 --alsologtostderr --driver=docker 
E0828 10:18:43.228381    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
E0828 10:19:10.932415    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-178000 --wait=true -v=7 --alsologtostderr --driver=docker : (1m20.974720852s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.76s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.49s)

TestMultiControlPlane/serial/AddSecondaryNode (36.18s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-178000 --control-plane -v=7 --alsologtostderr
E0828 10:19:45.253976    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/addons-376000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-178000 --control-plane -v=7 --alsologtostderr: (35.340307449s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-178000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.18s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.67s)

TestImageBuild/serial/Setup (20.76s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-554000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-554000 --driver=docker : (20.757054282s)
--- PASS: TestImageBuild/serial/Setup (20.76s)

TestImageBuild/serial/NormalBuild (1.89s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-554000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-554000: (1.890260004s)
--- PASS: TestImageBuild/serial/NormalBuild (1.89s)

TestImageBuild/serial/BuildWithBuildArg (0.81s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-554000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.81s)

TestImageBuild/serial/BuildWithDockerIgnore (0.61s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-554000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.61s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.6s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-554000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.60s)

                                                
                                    
TestJSONOutput/start/Command (59.73s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-493000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-493000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (59.727049855s)
--- PASS: TestJSONOutput/start/Command (59.73s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-493000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-493000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.61s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-493000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-493000 --output=json --user=testUser: (10.606130719s)
--- PASS: TestJSONOutput/stop/Command (10.61s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.58s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-273000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-273000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (360.012303ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b26f1eb3-ddae-4922-a165-eac8eed5713c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-273000] minikube v1.33.1 on Darwin 14.6.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d4efad71-82b0-46a2-8978-9f93c4cadbaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}
	{"specversion":"1.0","id":"296cd805-0b83-4678-b8cf-b1c9bf280d12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19529-1451/kubeconfig"}}
	{"specversion":"1.0","id":"6f0a21b7-7c34-474c-9934-724633ca470a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"3a56e479-79d8-4006-9213-3fe3f4debbb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7d426c11-111f-4174-88f4-af2858ce8d6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19529-1451/.minikube"}}
	{"specversion":"1.0","id":"e94733c5-8a7b-4b42-b4d1-8f64f846c055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1454991b-3d8b-4e04-acaf-16e28ee57ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-273000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-273000
--- PASS: TestErrorJSONOutput (0.58s)
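The stdout captured above shows that `minikube start --output=json` emits one CloudEvents-style JSON object per line, with the event kind in `type` and the payload in `data`. A minimal sketch of filtering such a stream for the error event, using two lines copied from the output above (the `first_error` helper name is ours, not minikube's):

```python
import json

# Two event lines copied from the TestErrorJSONOutput stdout above.
lines = [
    '{"specversion":"1.0","id":"d4efad71-82b0-46a2-8978-9f93c4cadbaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19529"}}',
    '{"specversion":"1.0","id":"1454991b-3d8b-4e04-acaf-16e28ee57ece","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver \'fail\' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}',
]

def first_error(event_lines):
    """Return the data payload of the first minikube error event, or None."""
    for raw in event_lines:
        event = json.loads(raw)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = first_error(lines)
print(err["name"], err["exitcode"])  # DRV_UNSUPPORTED_OS 56
```

This matches the exit status 56 reported by the test, which the harness reads from the `exitcode` field of the error event.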

                                                
                                    
TestKicCustomNetwork/create_custom_network (22.74s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-886000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-886000 --network=: (20.764891061s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-886000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-886000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-886000: (1.959138315s)
--- PASS: TestKicCustomNetwork/create_custom_network (22.74s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.66s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-916000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-916000 --network=bridge: (20.809341115s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-916000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-916000: (1.832010791s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.66s)

                                                
                                    
TestKicExistingNetwork (22.64s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-400000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-400000 --network=existing-network: (20.761010146s)
helpers_test.go:175: Cleaning up "existing-network-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-400000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-400000: (1.707421088s)
--- PASS: TestKicExistingNetwork (22.64s)

                                                
                                    
TestKicCustomSubnet (22.75s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-028000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-028000 --subnet=192.168.60.0/24: (20.878897134s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-028000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-028000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-028000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-028000: (1.853241229s)
--- PASS: TestKicCustomSubnet (22.75s)
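The `docker network inspect` call above reads the subnet back with the Go template `{{(index .IPAM.Config 0).Subnet}}` and the test expects it to equal the requested `--subnet=192.168.60.0/24`. A small illustration of the containment property this implies, with a hypothetical node IP (not taken from the log):

```python
import ipaddress

# --subnet value from the test run above; the node IP is hypothetical.
subnet = ipaddress.ip_network("192.168.60.0/24")
node_ip = ipaddress.ip_address("192.168.60.2")

# If Docker created the network with the requested subnet, any node IP
# it assigns must fall inside that range.
assert node_ip in subnet
print(node_ip, "is inside", subnet)
```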

                                                
                                    
TestKicStaticIP (23.24s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-127000 --static-ip=192.168.200.200
E0828 10:23:43.219318    1994 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19529-1451/.minikube/profiles/functional-425000/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-127000 --static-ip=192.168.200.200: (21.111799051s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-127000 ip
helpers_test.go:175: Cleaning up "static-ip-127000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-127000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-127000: (1.94216226s)
--- PASS: TestKicStaticIP (23.24s)
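The run above passes `--static-ip=192.168.200.200` and then verifies it via `minikube ip`. A rough sketch of the kind of validity check involved, using Python's `ipaddress`; the containing /24 network is an assumed value for illustration, not taken from minikube's code:

```python
import ipaddress

# --static-ip value from the test run above; the /24 network is assumed.
static_ip = ipaddress.ip_address("192.168.200.200")
network = ipaddress.ip_network("192.168.200.0/24")

# A usable static IP must fall inside the network and must not be its
# network or broadcast address.
assert static_ip in network
assert static_ip not in (network.network_address, network.broadcast_address)
print("static IP ok:", static_ip)
```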

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (46.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-233000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-233000 --driver=docker : (21.149215752s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-234000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-234000 --driver=docker : (20.556458655s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-233000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-234000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-234000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-234000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-234000: (1.799087974s)
helpers_test.go:175: Cleaning up "first-233000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-233000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-233000: (1.984472349s)
--- PASS: TestMinikubeProfile (46.72s)

                                                
                                    

Test skip (15/175)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Ingress (11.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-376000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-376000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-376000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c1d1619f-3913-49ce-b14a-92834d1e92e8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c1d1619f-3913-49ce-b14a-92834d1e92e8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005472914s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-376000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.76s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.13s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-425000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-425000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gcxpz" [e384a503-9950-4997-bcb5-714686dcc9f8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-gcxpz" [e384a503-9950-4997-bcb5-714686dcc9f8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004651659s
functional_test.go:1646: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (11.13s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    