Test Report: Docker_Linux_docker_arm64 19616

ead8b21730629246ae204938704f78710656bdeb:2024-09-12:36186

Failed tests (1/343)

Order  Failed test                   Duration (s)
33     TestAddons/parallel/Registry  73.6
TestAddons/parallel/Registry (73.6s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.604183ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004533931s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00422783s
addons_test.go:342: (dbg) Run:  kubectl --context addons-648158 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.108744092s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-648158 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-648158
helpers_test.go:235: (dbg) docker inspect addons-648158:

-- stdout --
	[
	    {
	        "Id": "095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d",
	        "Created": "2024-09-12T21:45:16.454512971Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1596049,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-12T21:45:16.577305683Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5a18b2e89815d9320db97822722b50bf88d564940d3d81fe93adf39e9c88570e",
	        "ResolvConfPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/hostname",
	        "HostsPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/hosts",
	        "LogPath": "/var/lib/docker/containers/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d/095af0a5b4842486a71822ec90804fa024087909c41b03b2ca5d01d479b58a9d-json.log",
	        "Name": "/addons-648158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-648158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-648158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f-init/diff:/var/lib/docker/overlay2/fbbc1fff48c3f03ea4a55053e2bf32977df83d1328f1e6f776215c001793c7bc/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e79c42ea07b06605f54026e352d2d531408dc3652bc95c766038aa549473b90f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-648158",
	                "Source": "/var/lib/docker/volumes/addons-648158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-648158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-648158",
	                "name.minikube.sigs.k8s.io": "addons-648158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9806d98886e0647b97e53e9d4920f70a5cd1f6fb56d19d2f1f17f1abf95b7040",
	            "SandboxKey": "/var/run/docker/netns/9806d98886e0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34330"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34331"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34334"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34332"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34333"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-648158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6d702ae16bd925e83694b0120b326ba837c5291976a7675f05fe20b814d3032c",
	                    "EndpointID": "725f79b339d0064018f1e21e8dc1a1ae1262ef510a2665dfcbd2e8944aaac933",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-648158",
	                        "095af0a5b484"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-648158 -n addons-648158
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 logs -n 25: (1.222738712s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-658229   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | -p download-only-658229              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| delete  | -p download-only-658229              | download-only-658229   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| start   | -o=json --download-only              | download-only-308645   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | -p download-only-308645              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| delete  | -p download-only-308645              | download-only-308645   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| delete  | -p download-only-658229              | download-only-658229   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| delete  | -p download-only-308645              | download-only-308645   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| start   | --download-only -p                   | download-docker-565752 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | download-docker-565752               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p download-docker-565752            | download-docker-565752 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| start   | --download-only -p                   | binary-mirror-696147   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | binary-mirror-696147                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42489               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-696147              | binary-mirror-696147   | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| addons  | enable dashboard -p                  | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | addons-648158                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | addons-648158                        |                        |         |         |                     |                     |
	| start   | -p addons-648158 --wait=true         | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:48 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=docker           |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	| addons  | addons-648158 addons disable         | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:49 UTC | 12 Sep 24 21:49 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-648158 addons                 | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:57 UTC | 12 Sep 24 21:58 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-648158 addons                 | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-648158 addons                 | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
	|         | addons-648158                        |                        |         |         |                     |                     |
	| ip      | addons-648158 ip                     | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
	| addons  | addons-648158 addons disable         | addons-648158          | jenkins | v1.34.0 | 12 Sep 24 21:58 UTC | 12 Sep 24 21:58 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:44:52
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:44:52.293426 1595550 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:44:52.293591 1595550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:52.293604 1595550 out.go:358] Setting ErrFile to fd 2...
	I0912 21:44:52.293611 1595550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:52.293853 1595550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 21:44:52.294286 1595550 out.go:352] Setting JSON to false
	I0912 21:44:52.295170 1595550 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23233,"bootTime":1726154260,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0912 21:44:52.295245 1595550 start.go:139] virtualization:  
	I0912 21:44:52.297255 1595550 out.go:177] * [addons-648158] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 21:44:52.298472 1595550 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 21:44:52.298524 1595550 notify.go:220] Checking for updates...
	I0912 21:44:52.301446 1595550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:44:52.302951 1595550 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 21:44:52.304264 1595550 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	I0912 21:44:52.305594 1595550 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 21:44:52.306977 1595550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:44:52.308309 1595550 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:44:52.329697 1595550 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:44:52.329815 1595550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:52.393221 1595550 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:52.38325617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:52.393338 1595550 docker.go:318] overlay module found
	I0912 21:44:52.394811 1595550 out.go:177] * Using the docker driver based on user configuration
	I0912 21:44:52.396076 1595550 start.go:297] selected driver: docker
	I0912 21:44:52.396092 1595550 start.go:901] validating driver "docker" against <nil>
	I0912 21:44:52.396107 1595550 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:44:52.396759 1595550 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:52.455832 1595550 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:52.446732647 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:52.455997 1595550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:44:52.456234 1595550 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:44:52.457440 1595550 out.go:177] * Using Docker driver with root privileges
	I0912 21:44:52.458509 1595550 cni.go:84] Creating CNI manager for ""
	I0912 21:44:52.458534 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:44:52.458544 1595550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0912 21:44:52.458621 1595550 start.go:340] cluster config:
	{Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:44:52.460430 1595550 out.go:177] * Starting "addons-648158" primary control-plane node in "addons-648158" cluster
	I0912 21:44:52.461548 1595550 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:44:52.462770 1595550 out.go:177] * Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:44:52.464061 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:44:52.464109 1595550 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0912 21:44:52.464120 1595550 cache.go:56] Caching tarball of preloaded images
	I0912 21:44:52.464126 1595550 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:44:52.464198 1595550 preload.go:172] Found /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0912 21:44:52.464208 1595550 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0912 21:44:52.464588 1595550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json ...
	I0912 21:44:52.464616 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json: {Name:mk39e0bed83dea5ddf12769e075879530e448b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:52.478580 1595550 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:44:52.478708 1595550 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:44:52.478731 1595550 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory, skipping pull
	I0912 21:44:52.478736 1595550 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 exists in cache, skipping pull
	I0912 21:44:52.478748 1595550 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 as a tarball
	I0912 21:44:52.478757 1595550 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from local cache
	I0912 21:45:09.501036 1595550 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 from cached tarball
	I0912 21:45:09.501078 1595550 cache.go:194] Successfully downloaded all kic artifacts
	I0912 21:45:09.501125 1595550 start.go:360] acquireMachinesLock for addons-648158: {Name:mkf47fbdfabd638c92e4e58b5d8a772d37a8e926 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:45:09.501258 1595550 start.go:364] duration metric: took 108.765µs to acquireMachinesLock for "addons-648158"
	I0912 21:45:09.501293 1595550 start.go:93] Provisioning new machine with config: &{Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:45:09.501373 1595550 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:45:09.502918 1595550 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 21:45:09.503157 1595550 start.go:159] libmachine.API.Create for "addons-648158" (driver="docker")
	I0912 21:45:09.503192 1595550 client.go:168] LocalClient.Create starting
	I0912 21:45:09.503326 1595550 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem
	I0912 21:45:10.436330 1595550 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem
	I0912 21:45:10.626763 1595550 cli_runner.go:164] Run: docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:45:10.644364 1595550 cli_runner.go:211] docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:45:10.644467 1595550 network_create.go:284] running [docker network inspect addons-648158] to gather additional debugging logs...
	I0912 21:45:10.644489 1595550 cli_runner.go:164] Run: docker network inspect addons-648158
	W0912 21:45:10.668333 1595550 cli_runner.go:211] docker network inspect addons-648158 returned with exit code 1
	I0912 21:45:10.668367 1595550 network_create.go:287] error running [docker network inspect addons-648158]: docker network inspect addons-648158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-648158 not found
	I0912 21:45:10.668391 1595550 network_create.go:289] output of [docker network inspect addons-648158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-648158 not found
	
	** /stderr **
	I0912 21:45:10.668513 1595550 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:45:10.686214 1595550 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017cd350}
	I0912 21:45:10.686265 1595550 network_create.go:124] attempt to create docker network addons-648158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0912 21:45:10.686328 1595550 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-648158 addons-648158
	I0912 21:45:10.754882 1595550 network_create.go:108] docker network addons-648158 192.168.49.0/24 created
	I0912 21:45:10.754915 1595550 kic.go:121] calculated static IP "192.168.49.2" for the "addons-648158" container
	I0912 21:45:10.755010 1595550 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:45:10.770554 1595550 cli_runner.go:164] Run: docker volume create addons-648158 --label name.minikube.sigs.k8s.io=addons-648158 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:45:10.787372 1595550 oci.go:103] Successfully created a docker volume addons-648158
	I0912 21:45:10.787472 1595550 cli_runner.go:164] Run: docker run --rm --name addons-648158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --entrypoint /usr/bin/test -v addons-648158:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib
	I0912 21:45:12.757080 1595550 cli_runner.go:217] Completed: docker run --rm --name addons-648158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --entrypoint /usr/bin/test -v addons-648158:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -d /var/lib: (1.969524908s)
	I0912 21:45:12.757110 1595550 oci.go:107] Successfully prepared a docker volume addons-648158
	I0912 21:45:12.757132 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:12.757152 1595550 kic.go:194] Starting extracting preloaded images to volume ...
	I0912 21:45:12.757240 1595550 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-648158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:45:16.386032 1595550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-648158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 -I lz4 -xf /preloaded.tar -C /extractDir: (3.628751783s)
	I0912 21:45:16.386076 1595550 kic.go:203] duration metric: took 3.628909301s to extract preloaded images to volume ...
	W0912 21:45:16.386235 1595550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 21:45:16.386347 1595550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:45:16.440109 1595550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-648158 --name addons-648158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-648158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-648158 --network addons-648158 --ip 192.168.49.2 --volume addons-648158:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889
	I0912 21:45:16.738593 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Running}}
	I0912 21:45:16.760281 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:16.783349 1595550 cli_runner.go:164] Run: docker exec addons-648158 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:45:16.863499 1595550 oci.go:144] the created container "addons-648158" has a running status.
	I0912 21:45:16.863532 1595550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa...
	I0912 21:45:17.481383 1595550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:45:17.512071 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:17.531116 1595550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:45:17.531135 1595550 kic_runner.go:114] Args: [docker exec --privileged addons-648158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:45:17.600958 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:17.620855 1595550 machine.go:93] provisionDockerMachine start ...
	I0912 21:45:17.620940 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:17.640331 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:17.640736 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:17.640749 1595550 main.go:141] libmachine: About to run SSH command:
	hostname
	I0912 21:45:17.796831 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-648158
	
	I0912 21:45:17.796905 1595550 ubuntu.go:169] provisioning hostname "addons-648158"
	I0912 21:45:17.797060 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:17.816878 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:17.817148 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:17.817169 1595550 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-648158 && echo "addons-648158" | sudo tee /etc/hostname
	I0912 21:45:17.969507 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-648158
	
	I0912 21:45:17.969638 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:17.986753 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:17.986999 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:17.987021 1595550 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-648158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-648158/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-648158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:45:18.129467 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:45:18.129492 1595550 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19616-1589418/.minikube CaCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19616-1589418/.minikube}
	I0912 21:45:18.129510 1595550 ubuntu.go:177] setting up certificates
	I0912 21:45:18.129520 1595550 provision.go:84] configureAuth start
	I0912 21:45:18.129583 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
	I0912 21:45:18.145625 1595550 provision.go:143] copyHostCerts
	I0912 21:45:18.145718 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.pem (1082 bytes)
	I0912 21:45:18.145853 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/cert.pem (1123 bytes)
	I0912 21:45:18.145926 1595550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19616-1589418/.minikube/key.pem (1679 bytes)
	I0912 21:45:18.145991 1595550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem org=jenkins.addons-648158 san=[127.0.0.1 192.168.49.2 addons-648158 localhost minikube]
	I0912 21:45:18.407925 1595550 provision.go:177] copyRemoteCerts
	I0912 21:45:18.408005 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:45:18.408050 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:18.425554 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:18.526748 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:45:18.552750 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0912 21:45:18.575890 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 21:45:18.599170 1595550 provision.go:87] duration metric: took 469.636361ms to configureAuth
	I0912 21:45:18.599197 1595550 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:45:18.599388 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:45:18.599439 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:18.616470 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:18.616717 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:18.616735 1595550 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0912 21:45:18.757570 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0912 21:45:18.757634 1595550 ubuntu.go:71] root file system type: overlay
	I0912 21:45:18.757762 1595550 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0912 21:45:18.757832 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:18.774755 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:18.775011 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:18.775100 1595550 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0912 21:45:18.926075 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0912 21:45:18.926206 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:18.943690 1595550 main.go:141] libmachine: Using SSH client type: native
	I0912 21:45:18.943950 1595550 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ebfd0] 0x3ee830 <nil>  [] 0s} 127.0.0.1 34330 <nil> <nil>}
	I0912 21:45:18.943974 1595550 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0912 21:45:19.713569 1595550 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-12 21:45:18.917186149 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
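Aside (not part of the log): the generated docker.service above clears the inherited start command with a bare `ExecStart=` before setting its own. A minimal standalone sketch of that pattern, using a hypothetical scratch file instead of a real unit:

```shell
# Demo of the ExecStart-reset pattern from the generated docker.service above.
# systemd treats a bare "ExecStart=" as "discard any previously defined
# ExecStart", leaving exactly one effective command. Without it, a Type=notify
# unit fails with "Service has more than one ExecStart= setting ...".
cat > /tmp/docker-override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' /tmp/docker-override.conf   # prints 2 (clear + set)
```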
	I0912 21:45:19.713599 1595550 machine.go:96] duration metric: took 2.092725054s to provisionDockerMachine
	I0912 21:45:19.713614 1595550 client.go:171] duration metric: took 10.210408914s to LocalClient.Create
	I0912 21:45:19.713626 1595550 start.go:167] duration metric: took 10.210469737s to libmachine.API.Create "addons-648158"
	I0912 21:45:19.713637 1595550 start.go:293] postStartSetup for "addons-648158" (driver="docker")
	I0912 21:45:19.713651 1595550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:45:19.713720 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:45:19.713769 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:19.733297 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:19.830114 1595550 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:45:19.833325 1595550 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:45:19.833411 1595550 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:45:19.833428 1595550 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:45:19.833436 1595550 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0912 21:45:19.833448 1595550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1589418/.minikube/addons for local assets ...
	I0912 21:45:19.833535 1595550 filesync.go:126] Scanning /home/jenkins/minikube-integration/19616-1589418/.minikube/files for local assets ...
	I0912 21:45:19.833561 1595550 start.go:296] duration metric: took 119.914562ms for postStartSetup
	I0912 21:45:19.833889 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
	I0912 21:45:19.853376 1595550 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/config.json ...
	I0912 21:45:19.853657 1595550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:45:19.853719 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:19.870323 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:19.965999 1595550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:45:19.970590 1595550 start.go:128] duration metric: took 10.469199921s to createHost
	I0912 21:45:19.970613 1595550 start.go:83] releasing machines lock for "addons-648158", held for 10.46934336s
	I0912 21:45:19.970684 1595550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-648158
	I0912 21:45:19.987734 1595550 ssh_runner.go:195] Run: cat /version.json
	I0912 21:45:19.987793 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:19.988088 1595550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:45:19.988151 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:20.013682 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:20.029638 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:20.247428 1595550 ssh_runner.go:195] Run: systemctl --version
	I0912 21:45:20.251674 1595550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:45:20.255879 1595550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0912 21:45:20.281724 1595550 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
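Aside (not part of the log): the loopback CNI patch above injects a missing `"name"` field and pins `cniVersion`. The same two edits, sketched against a throwaway copy (the filename is hypothetical):

```shell
# Demo of the loopback CNI patch logged above, applied to a scratch file
# rather than anything under /etc/cni/net.d.
cat > /tmp/200-loopback.conf <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF
# Add a "name" field only if one is missing (mirrors the grep-guarded sed).
grep -q name /tmp/200-loopback.conf || \
  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' /tmp/200-loopback.conf
# Pin the CNI spec version.
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /tmp/200-loopback.conf
cat /tmp/200-loopback.conf
```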
	I0912 21:45:20.281806 1595550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:45:20.313660 1595550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 21:45:20.313688 1595550 start.go:495] detecting cgroup driver to use...
	I0912 21:45:20.313723 1595550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:45:20.313826 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:45:20.329940 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0912 21:45:20.340341 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0912 21:45:20.350162 1595550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0912 21:45:20.350241 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0912 21:45:20.359880 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:45:20.369781 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0912 21:45:20.379531 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0912 21:45:20.389728 1595550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:45:20.398677 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0912 21:45:20.408119 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0912 21:45:20.417909 1595550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0912 21:45:20.427500 1595550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:45:20.436059 1595550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:45:20.444410 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:20.529407 1595550 ssh_runner.go:195] Run: sudo systemctl restart containerd
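Aside (not part of the log): the containerd rewrites above are plain `sed` substitutions. The cgroup-driver one, demonstrated on a throwaway copy instead of the real /etc/containerd/config.toml; note the capture group preserves the original indentation:

```shell
# Demo of the SystemdCgroup rewrite logged above, run against a scratch file.
cat > /tmp/containerd-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/containerd-config.toml
grep 'SystemdCgroup' /tmp/containerd-config.toml   # "  SystemdCgroup = false"
```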
	I0912 21:45:20.622851 1595550 start.go:495] detecting cgroup driver to use...
	I0912 21:45:20.622904 1595550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0912 21:45:20.622975 1595550 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0912 21:45:20.636940 1595550 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0912 21:45:20.637067 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0912 21:45:20.651183 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:45:20.674013 1595550 ssh_runner.go:195] Run: which cri-dockerd
	I0912 21:45:20.678835 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0912 21:45:20.687752 1595550 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0912 21:45:20.713613 1595550 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0912 21:45:20.818050 1595550 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0912 21:45:20.921095 1595550 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0912 21:45:20.921293 1595550 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0912 21:45:20.942326 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:21.042133 1595550 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0912 21:45:21.317629 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0912 21:45:21.329792 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:45:21.341876 1595550 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0912 21:45:21.433675 1595550 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0912 21:45:21.531235 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:21.616716 1595550 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0912 21:45:21.630833 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0912 21:45:21.641937 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:21.726283 1595550 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0912 21:45:21.800039 1595550 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0912 21:45:21.800196 1595550 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0912 21:45:21.804546 1595550 start.go:563] Will wait 60s for crictl version
	I0912 21:45:21.804608 1595550 ssh_runner.go:195] Run: which crictl
	I0912 21:45:21.808131 1595550 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:45:21.846623 1595550 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0912 21:45:21.846763 1595550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:45:21.869582 1595550 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0912 21:45:21.896391 1595550 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0912 21:45:21.896558 1595550 cli_runner.go:164] Run: docker network inspect addons-648158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:45:21.912634 1595550 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:45:21.916282 1595550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
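Aside (not part of the log): the /etc/hosts update above is idempotent because it strips any stale entry before appending a fresh one. A sketch on a scratch file (bash syntax, since the logged command uses `$'\t'`):

```shell
# Demo of the strip-then-append hosts update logged above, on a scratch file
# that already contains a (stale) host.minikube.internal entry.
printf '127.0.0.1\tlocalhost\n192.168.49.99\thost.minikube.internal\n' > /tmp/hosts.demo
{ grep -v $'\thost.minikube.internal$' /tmp/hosts.demo; \
  printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.new
cp /tmp/hosts.new /tmp/hosts.demo
grep -c 'host.minikube.internal' /tmp/hosts.demo   # prints 1: no duplicates
```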
	I0912 21:45:21.927505 1595550 kubeadm.go:883] updating cluster {Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuF
irmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0912 21:45:21.927629 1595550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0912 21:45:21.927693 1595550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:45:21.946232 1595550 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:45:21.946254 1595550 docker.go:615] Images already preloaded, skipping extraction
	I0912 21:45:21.946321 1595550 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0912 21:45:21.963792 1595550 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0912 21:45:21.963818 1595550 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:45:21.963837 1595550 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0912 21:45:21.963939 1595550 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-648158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0912 21:45:21.964010 1595550 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0912 21:45:22.020002 1595550 cni.go:84] Creating CNI manager for ""
	I0912 21:45:22.020035 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:45:22.020046 1595550 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0912 21:45:22.020087 1595550 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-648158 NodeName:addons-648158 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:45:22.020284 1595550 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-648158"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 21:45:22.020365 1595550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0912 21:45:22.029600 1595550 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:45:22.029678 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:45:22.038688 1595550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0912 21:45:22.057249 1595550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:45:22.075395 1595550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0912 21:45:22.093938 1595550 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:45:22.097624 1595550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:45:22.108291 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:22.196420 1595550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:45:22.210471 1595550 certs.go:68] Setting up /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158 for IP: 192.168.49.2
	I0912 21:45:22.210536 1595550 certs.go:194] generating shared ca certs ...
	I0912 21:45:22.210567 1595550 certs.go:226] acquiring lock for ca certs: {Name:mkbf22811db03e42b0f0c081454eb3f99708b183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:22.211317 1595550 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key
	I0912 21:45:22.433480 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt ...
	I0912 21:45:22.433513 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt: {Name:mk72e5f935fec294e69009cf4aea31435c70e4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:22.433737 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key ...
	I0912 21:45:22.433751 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key: {Name:mkb6385cc4d730e4d7a49f02cefcaae4249d85d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:22.434255 1595550 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key
	I0912 21:45:22.942358 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt ...
	I0912 21:45:22.942388 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt: {Name:mk450a2530aa8953153326429aca610c57afd125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:22.942581 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key ...
	I0912 21:45:22.942604 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key: {Name:mk66c0ef036ecfd03ff400c618ae21c60bb0c60a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:22.943068 1595550 certs.go:256] generating profile certs ...
	I0912 21:45:22.943139 1595550 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key
	I0912 21:45:22.943163 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt with IP's: []
	I0912 21:45:23.402774 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt ...
	I0912 21:45:23.402807 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: {Name:mk1f2498fc67c90097a8f66b5054399b23fe170f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:23.403479 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key ...
	I0912 21:45:23.403495 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.key: {Name:mkdfcc07aa074ee1904526cb21a167c2a8cecfd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:23.403596 1595550 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0
	I0912 21:45:23.403618 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0912 21:45:23.734010 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 ...
	I0912 21:45:23.734040 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0: {Name:mk4f0d9173e6a442fd07c768c295ccf81e51b6d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:23.734700 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0 ...
	I0912 21:45:23.734717 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0: {Name:mk2326a38e6bc7100b28b1adaef28869bbabcc2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:23.734811 1595550 certs.go:381] copying /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt.71e32bb0 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt
	I0912 21:45:23.734896 1595550 certs.go:385] copying /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key.71e32bb0 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key
	I0912 21:45:23.734959 1595550 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key
	I0912 21:45:23.734982 1595550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt with IP's: []
	I0912 21:45:24.311226 1595550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt ...
	I0912 21:45:24.311262 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt: {Name:mkd035209e7f3c86c91125474d5aebc2da916a20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:24.311469 1595550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key ...
	I0912 21:45:24.311486 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key: {Name:mk8af13efd87462a8c15a1a6061ce1153fe9fa6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:24.312124 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 21:45:24.312171 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:45:24.312202 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:45:24.312231 1595550 certs.go:484] found cert: /home/jenkins/minikube-integration/19616-1589418/.minikube/certs/key.pem (1679 bytes)
	I0912 21:45:24.312916 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:45:24.338386 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:45:24.362685 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:45:24.386838 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:45:24.410216 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0912 21:45:24.433520 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 21:45:24.457173 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:45:24.480614 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:45:24.504138 1595550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19616-1589418/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:45:24.527993 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:45:24.545457 1595550 ssh_runner.go:195] Run: openssl version
	I0912 21:45:24.550812 1595550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:45:24.560312 1595550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:45:24.563830 1595550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:45 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:45:24.563896 1595550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:45:24.570573 1595550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 21:45:24.579843 1595550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0912 21:45:24.583111 1595550 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0912 21:45:24.583160 1595550 kubeadm.go:392] StartCluster: {Name:addons-648158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-648158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:45:24.583283 1595550 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0912 21:45:24.615777 1595550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:45:24.625563 1595550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:45:24.634044 1595550 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:45:24.634109 1595550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:45:24.643924 1595550 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:45:24.643945 1595550 kubeadm.go:157] found existing configuration files:
	
	I0912 21:45:24.643995 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 21:45:24.652876 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0912 21:45:24.652942 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0912 21:45:24.661193 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 21:45:24.670699 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0912 21:45:24.670772 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0912 21:45:24.679137 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 21:45:24.687316 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0912 21:45:24.687381 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 21:45:24.695484 1595550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 21:45:24.704182 1595550 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0912 21:45:24.704280 1595550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 21:45:24.712442 1595550 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:45:24.754741 1595550 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0912 21:45:24.754802 1595550 kubeadm.go:310] [preflight] Running pre-flight checks
	I0912 21:45:24.777126 1595550 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0912 21:45:24.777202 1595550 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0912 21:45:24.777239 1595550 kubeadm.go:310] OS: Linux
	I0912 21:45:24.777287 1595550 kubeadm.go:310] CGROUPS_CPU: enabled
	I0912 21:45:24.777338 1595550 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0912 21:45:24.777388 1595550 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0912 21:45:24.777438 1595550 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0912 21:45:24.777488 1595550 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0912 21:45:24.777539 1595550 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0912 21:45:24.777585 1595550 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0912 21:45:24.777636 1595550 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0912 21:45:24.777683 1595550 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0912 21:45:24.836755 1595550 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:45:24.836867 1595550 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:45:24.837004 1595550 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0912 21:45:24.848565 1595550 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:45:24.854378 1595550 out.go:235]   - Generating certificates and keys ...
	I0912 21:45:24.854489 1595550 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0912 21:45:24.854580 1595550 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0912 21:45:24.970260 1595550 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:45:25.558010 1595550 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:45:26.640945 1595550 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:45:26.867119 1595550 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0912 21:45:27.247898 1595550 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0912 21:45:27.248175 1595550 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-648158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:45:27.856395 1595550 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0912 21:45:27.856614 1595550 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-648158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:45:27.962437 1595550 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:45:29.036280 1595550 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:45:30.133858 1595550 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0912 21:45:30.134163 1595550 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:45:30.381887 1595550 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:45:30.620823 1595550 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0912 21:45:30.867160 1595550 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:45:31.507155 1595550 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:45:32.263640 1595550 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:45:32.264398 1595550 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:45:32.267495 1595550 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:45:32.271086 1595550 out.go:235]   - Booting up control plane ...
	I0912 21:45:32.271192 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:45:32.271268 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:45:32.272917 1595550 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:45:32.284203 1595550 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:45:32.290357 1595550 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:45:32.290680 1595550 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0912 21:45:32.405529 1595550 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0912 21:45:32.405648 1595550 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0912 21:45:34.404315 1595550 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.000799788s
	I0912 21:45:34.404421 1595550 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0912 21:45:40.406409 1595550 kubeadm.go:310] [api-check] The API server is healthy after 6.002087447s
	I0912 21:45:40.425225 1595550 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:45:40.442830 1595550 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:45:40.465774 1595550 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:45:40.465966 1595550 kubeadm.go:310] [mark-control-plane] Marking the node addons-648158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:45:40.476221 1595550 kubeadm.go:310] [bootstrap-token] Using token: xaukdn.izn2qramjufoi8qt
	I0912 21:45:40.478979 1595550 out.go:235]   - Configuring RBAC rules ...
	I0912 21:45:40.479107 1595550 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:45:40.483534 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:45:40.493696 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:45:40.498584 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:45:40.503705 1595550 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:45:40.507697 1595550 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:45:40.812885 1595550 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:45:41.242035 1595550 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0912 21:45:41.813576 1595550 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0912 21:45:41.814549 1595550 kubeadm.go:310] 
	I0912 21:45:41.814632 1595550 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0912 21:45:41.814644 1595550 kubeadm.go:310] 
	I0912 21:45:41.814719 1595550 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0912 21:45:41.814731 1595550 kubeadm.go:310] 
	I0912 21:45:41.814766 1595550 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0912 21:45:41.814827 1595550 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:45:41.814881 1595550 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:45:41.814894 1595550 kubeadm.go:310] 
	I0912 21:45:41.814947 1595550 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0912 21:45:41.814963 1595550 kubeadm.go:310] 
	I0912 21:45:41.815010 1595550 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:45:41.815019 1595550 kubeadm.go:310] 
	I0912 21:45:41.815069 1595550 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0912 21:45:41.815144 1595550 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:45:41.815216 1595550 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:45:41.815226 1595550 kubeadm.go:310] 
	I0912 21:45:41.815307 1595550 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:45:41.815393 1595550 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0912 21:45:41.815452 1595550 kubeadm.go:310] 
	I0912 21:45:41.815537 1595550 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token xaukdn.izn2qramjufoi8qt \
	I0912 21:45:41.815641 1595550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:889bf8e7f7fa68711c5600116be7317db5666eb96597e491e0dfca9010b6a355 \
	I0912 21:45:41.815665 1595550 kubeadm.go:310] 	--control-plane 
	I0912 21:45:41.815674 1595550 kubeadm.go:310] 
	I0912 21:45:41.815768 1595550 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:45:41.815779 1595550 kubeadm.go:310] 
	I0912 21:45:41.819215 1595550 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token xaukdn.izn2qramjufoi8qt \
	I0912 21:45:41.819329 1595550 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:889bf8e7f7fa68711c5600116be7317db5666eb96597e491e0dfca9010b6a355 
	I0912 21:45:41.819628 1595550 kubeadm.go:310] W0912 21:45:24.751449    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:45:41.819905 1595550 kubeadm.go:310] W0912 21:45:24.752316    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0912 21:45:41.820110 1595550 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0912 21:45:41.820213 1595550 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:45:41.820231 1595550 cni.go:84] Creating CNI manager for ""
	I0912 21:45:41.820246 1595550 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0912 21:45:41.823242 1595550 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0912 21:45:41.825877 1595550 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0912 21:45:41.835675 1595550 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0912 21:45:41.858132 1595550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:45:41.858271 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:41.858366 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-648158 minikube.k8s.io/updated_at=2024_09_12T21_45_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8 minikube.k8s.io/name=addons-648158 minikube.k8s.io/primary=true
	I0912 21:45:41.992266 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:41.992334 1595550 ops.go:34] apiserver oom_adj: -16
	I0912 21:45:42.493155 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:42.992387 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:43.492437 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:43.992931 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:44.493137 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:44.992445 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:45.493002 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:45.992849 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:46.492326 1595550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:45:46.635777 1595550 kubeadm.go:1113] duration metric: took 4.777556205s to wait for elevateKubeSystemPrivileges
	I0912 21:45:46.635803 1595550 kubeadm.go:394] duration metric: took 22.052646368s to StartCluster
	I0912 21:45:46.635820 1595550 settings.go:142] acquiring lock: {Name:mke0a909d4fb4359a87942368342244776ea0df1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:46.635937 1595550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 21:45:46.636319 1595550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/kubeconfig: {Name:mk5c78d80e4776a3c25d7663bf634139150573f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:45:46.636970 1595550 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0912 21:45:46.637094 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:45:46.637340 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:45:46.637369 1595550 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0912 21:45:46.637439 1595550 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-648158"
	I0912 21:45:46.637450 1595550 addons.go:69] Setting gcp-auth=true in profile "addons-648158"
	I0912 21:45:46.637475 1595550 mustload.go:65] Loading cluster: addons-648158
	I0912 21:45:46.637488 1595550 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-648158"
	I0912 21:45:46.637558 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.637576 1595550 config.go:182] Loaded profile config "addons-648158": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 21:45:46.637893 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.638087 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.638489 1595550 addons.go:69] Setting ingress=true in profile "addons-648158"
	I0912 21:45:46.638519 1595550 addons.go:234] Setting addon ingress=true in "addons-648158"
	I0912 21:45:46.638554 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.638952 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.642940 1595550 addons.go:69] Setting cloud-spanner=true in profile "addons-648158"
	I0912 21:45:46.642982 1595550 addons.go:234] Setting addon cloud-spanner=true in "addons-648158"
	I0912 21:45:46.643029 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.643469 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.643977 1595550 addons.go:69] Setting ingress-dns=true in profile "addons-648158"
	I0912 21:45:46.644027 1595550 addons.go:234] Setting addon ingress-dns=true in "addons-648158"
	I0912 21:45:46.644121 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.644638 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.644832 1595550 addons.go:69] Setting inspektor-gadget=true in profile "addons-648158"
	I0912 21:45:46.663106 1595550 addons.go:234] Setting addon inspektor-gadget=true in "addons-648158"
	I0912 21:45:46.663169 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.663663 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.637446 1595550 addons.go:69] Setting default-storageclass=true in profile "addons-648158"
	I0912 21:45:46.673344 1595550 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-648158"
	I0912 21:45:46.673803 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.644842 1595550 addons.go:69] Setting metrics-server=true in profile "addons-648158"
	I0912 21:45:46.674033 1595550 addons.go:234] Setting addon metrics-server=true in "addons-648158"
	I0912 21:45:46.674076 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.674947 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.644846 1595550 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-648158"
	I0912 21:45:46.676648 1595550 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-648158"
	I0912 21:45:46.681317 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.644849 1595550 addons.go:69] Setting registry=true in profile "addons-648158"
	I0912 21:45:46.697266 1595550 addons.go:234] Setting addon registry=true in "addons-648158"
	I0912 21:45:46.644852 1595550 addons.go:69] Setting storage-provisioner=true in profile "addons-648158"
	I0912 21:45:46.644856 1595550 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-648158"
	I0912 21:45:46.644859 1595550 addons.go:69] Setting volcano=true in profile "addons-648158"
	I0912 21:45:46.644864 1595550 addons.go:69] Setting volumesnapshots=true in profile "addons-648158"
	I0912 21:45:46.644889 1595550 out.go:177] * Verifying Kubernetes components...
	I0912 21:45:46.637440 1595550 addons.go:69] Setting yakd=true in profile "addons-648158"
	I0912 21:45:46.697837 1595550 addons.go:234] Setting addon yakd=true in "addons-648158"
	I0912 21:45:46.697990 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.698617 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.715090 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.715263 1595550 addons.go:234] Setting addon storage-provisioner=true in "addons-648158"
	I0912 21:45:46.715316 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.715775 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.715235 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.722724 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.736754 1595550 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-648158"
	I0912 21:45:46.737206 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.752406 1595550 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:45:46.753374 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.754588 1595550 addons.go:234] Setting addon volcano=true in "addons-648158"
	I0912 21:45:46.754661 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.755134 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.776675 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:45:46.779444 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:45:46.779653 1595550 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0912 21:45:46.783042 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:45:46.783275 1595550 addons.go:234] Setting addon volumesnapshots=true in "addons-648158"
	I0912 21:45:46.783318 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.783785 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.804667 1595550 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:45:46.804931 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:45:46.810164 1595550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:45:46.810835 1595550 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:45:46.810858 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0912 21:45:46.810930 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.816313 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:45:46.823996 1595550 addons.go:234] Setting addon default-storageclass=true in "addons-648158"
	I0912 21:45:46.824040 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:46.824474 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:46.830021 1595550 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0912 21:45:46.831617 1595550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:45:46.833806 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:45:46.834934 1595550 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:45:46.834949 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:45:46.835007 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.851420 1595550 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0912 21:45:46.852190 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:45:46.856514 1595550 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0912 21:45:46.862059 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:45:46.862289 1595550 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:45:46.862302 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:45:46.862367 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.872707 1595550 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:45:46.873245 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0912 21:45:46.873340 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.886842 1595550 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0912 21:45:46.889404 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:45:46.889436 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:45:46.889502 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.890363 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:45:46.890381 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:45:46.890447 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.910780 1595550 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0912 21:45:46.913786 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:45:46.913808 1595550 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:45:46.913869 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:46.933215 1595550 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0912 21:45:46.942426 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0912 21:45:46.942456 1595550 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0912 21:45:46.942529 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.039988 1595550 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-648158"
	I0912 21:45:47.040035 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:47.040466 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:47.044449 1595550 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:45:47.049716 1595550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:45:47.049738 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:45:47.049806 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.063165 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.068041 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.072482 1595550 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0912 21:45:47.075113 1595550 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0912 21:45:47.077648 1595550 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0912 21:45:47.080314 1595550 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0912 21:45:47.093545 1595550 out.go:177]   - Using image docker.io/registry:2.8.3
	I0912 21:45:47.094720 1595550 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:45:47.094744 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0912 21:45:47.094814 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.093545 1595550 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:45:47.101208 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:45:47.101229 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0912 21:45:47.101298 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.111850 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:45:47.111874 1595550 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:45:47.111948 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.115046 1595550 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:45:47.115067 1595550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:45:47.115129 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.135677 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.141516 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.153737 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.163808 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.176279 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.197406 1595550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0912 21:45:47.217509 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.261305 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.261830 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.283390 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.284376 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.287151 1595550 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0912 21:45:47.289672 1595550 out.go:177]   - Using image docker.io/busybox:stable
	I0912 21:45:47.292349 1595550 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:45:47.292370 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0912 21:45:47.292451 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:47.292590 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.332667 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:47.872884 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0912 21:45:48.008211 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:45:48.008251 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:45:48.069924 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:45:48.077075 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:45:48.141661 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0912 21:45:48.141698 1595550 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0912 21:45:48.177696 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:45:48.217187 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:45:48.326485 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:45:48.329657 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:45:48.329694 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:45:48.384153 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0912 21:45:48.394680 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:45:48.394712 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:45:48.467347 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:45:48.467376 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:45:48.477875 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:45:48.477917 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:45:48.492289 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:45:48.492326 1595550 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:45:48.513624 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:45:48.513652 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:45:48.518672 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0912 21:45:48.572420 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0912 21:45:48.572455 1595550 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0912 21:45:48.630419 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:45:48.630463 1595550 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:45:48.693051 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:45:48.693078 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:45:48.724988 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:45:48.725030 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:45:48.750237 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:45:48.750265 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:45:48.755337 1595550 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:45:48.755367 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:45:48.802251 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0912 21:45:48.802280 1595550 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0912 21:45:48.831218 1595550 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:45:48.831246 1595550 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:45:48.884971 1595550 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:45:48.885000 1595550 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:45:49.056913 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:45:49.062027 1595550 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:45:49.062052 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0912 21:45:49.067749 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:45:49.067791 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:45:49.084666 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:45:49.084707 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:45:49.149870 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:45:49.149912 1595550 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:45:49.158809 1595550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.961372848s)
	I0912 21:45:49.158920 1595550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.325253507s)
	I0912 21:45:49.158939 1595550 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0912 21:45:49.159803 1595550 node_ready.go:35] waiting up to 6m0s for node "addons-648158" to be "Ready" ...
	I0912 21:45:49.169885 1595550 node_ready.go:49] node "addons-648158" has status "Ready":"True"
	I0912 21:45:49.169916 1595550 node_ready.go:38] duration metric: took 10.089341ms for node "addons-648158" to be "Ready" ...
	I0912 21:45:49.169928 1595550 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:45:49.179794 1595550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace to be "Ready" ...
	I0912 21:45:49.340728 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:45:49.433274 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0912 21:45:49.445216 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:45:49.445243 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:45:49.502705 1595550 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:45:49.502731 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:45:49.531290 1595550 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:45:49.531324 1595550 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:45:49.666835 1595550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-648158" context rescaled to 1 replicas
	I0912 21:45:49.676177 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.803253843s)
	I0912 21:45:49.733977 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:45:49.734005 1595550 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:45:49.808722 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:45:49.890529 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:45:49.890555 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:45:49.959216 1595550 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:45:49.959241 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0912 21:45:50.173812 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:45:50.173838 1595550 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:45:50.254300 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:45:50.563795 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:45:50.563816 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:45:50.863267 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:45:50.863293 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:45:51.185871 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:45:51.240748 1595550 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:45:51.240785 1595550 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:45:52.066377 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:45:53.186197 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:45:53.768045 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:45:53.768128 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:53.809200 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:55.158917 1595550 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:45:55.187227 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:45:55.489207 1595550 addons.go:234] Setting addon gcp-auth=true in "addons-648158"
	I0912 21:45:55.489262 1595550 host.go:66] Checking if "addons-648158" exists ...
	I0912 21:45:55.489752 1595550 cli_runner.go:164] Run: docker container inspect addons-648158 --format={{.State.Status}}
	I0912 21:45:55.514559 1595550 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:45:55.514682 1595550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-648158
	I0912 21:45:55.540902 1595550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34330 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/addons-648158/id_rsa Username:docker}
	I0912 21:45:56.657675 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.479943666s)
	I0912 21:45:56.657705 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.331198572s)
	I0912 21:45:56.657675 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.580558405s)
	I0912 21:45:56.657693 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.440482256s)
	I0912 21:45:56.657789 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.587831708s)
	I0912 21:45:56.657799 1595550 addons.go:475] Verifying addon ingress=true in "addons-648158"
	I0912 21:45:56.660585 1595550 out.go:177] * Verifying ingress addon...
	I0912 21:45:56.664640 1595550 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:45:56.671855 1595550 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:45:56.671889 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:57.168903 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:57.669566 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:57.689481 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:45:58.202852 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:58.678169 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:59.202859 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.684152578s)
	I0912 21:45:59.203160 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.146215827s)
	I0912 21:45:59.203180 1595550 addons.go:475] Verifying addon registry=true in "addons-648158"
	I0912 21:45:59.203607 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.770300306s)
	I0912 21:45:59.203683 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.862785214s)
	I0912 21:45:59.203710 1595550 addons.go:475] Verifying addon metrics-server=true in "addons-648158"
	I0912 21:45:59.203763 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.394995568s)
	W0912 21:45:59.203808 1595550 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:45:59.203894 1595550 retry.go:31] will retry after 306.935082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
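The `will retry after 306.935082ms` line above comes from minikube's generic retry helper: the first `kubectl apply` fails because the VolumeSnapshotClass CRD is not yet established, so the whole apply is re-run after a short backoff. As a rough illustration only (this is a hypothetical Python sketch, not minikube's actual Go code; `apply_with_retry` and `fake_apply` are invented names), the pattern looks like:

```python
import time

def apply_with_retry(apply, attempts=3, initial_backoff=0.3):
    """Run `apply` until it succeeds, sleeping with a growing backoff
    between failures, mirroring the retry.go behaviour seen in the log."""
    backoff = initial_backoff
    for attempt in range(attempts):
        try:
            return apply()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(backoff)  # e.g. "will retry after 306.935082ms"
            backoff *= 2

# Toy stand-in for `kubectl apply`: fails once (CRDs not yet
# established), then succeeds, like the single retry in the log.
calls = {"n": 0}
def fake_apply():
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("no matches for kind VolumeSnapshotClass")
    return "applied"

print(apply_with_retry(fake_apply, initial_backoff=0.001))
```

The retry succeeds here for the same reason it does in the log: by the second attempt the CRDs created in the first attempt have been registered, so the dependent resource maps cleanly.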
	I0912 21:45:59.203864 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.949532392s)
	I0912 21:45:59.204038 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.819856592s)
	I0912 21:45:59.206074 1595550 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-648158 service yakd-dashboard -n yakd-dashboard
	
	I0912 21:45:59.206163 1595550 out.go:177] * Verifying registry addon...
	I0912 21:45:59.211064 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:45:59.308940 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:59.310302 1595550 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:45:59.310327 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:59.512001 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:45:59.684407 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:59.696641 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:45:59.779807 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:00.213565 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:00.222293 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:00.251848 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.185415255s)
	I0912 21:46:00.251892 1595550 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-648158"
	I0912 21:46:00.252133 1595550 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.737503729s)
	I0912 21:46:00.259459 1595550 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:46:00.259552 1595550 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0912 21:46:00.263836 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:46:00.282688 1595550 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0912 21:46:00.284894 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:46:00.284930 1595550 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:46:00.333711 1595550 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:46:00.333740 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:00.450555 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:46:00.450580 1595550 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:46:00.573429 1595550 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:46:00.573451 1595550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0912 21:46:00.616464 1595550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:46:00.672721 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:00.715671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:00.769596 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:01.169508 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:01.216973 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:01.270322 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:01.670164 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:01.716696 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:01.769071 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:02.170852 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:02.187346 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:02.275283 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:02.276738 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:02.300908 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.788856331s)
	I0912 21:46:02.342769 1595550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.726219025s)
	I0912 21:46:02.345671 1595550 addons.go:475] Verifying addon gcp-auth=true in "addons-648158"
	I0912 21:46:02.348084 1595550 out.go:177] * Verifying gcp-auth addon...
	I0912 21:46:02.351170 1595550 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:46:02.370943 1595550 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:46:02.669230 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:02.714852 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:02.768634 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:03.169782 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:03.214599 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:03.268243 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:03.669759 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:03.715188 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:03.769370 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:04.171350 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:04.191783 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:04.214953 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:04.269214 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:04.668946 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:04.715207 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:04.770776 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:05.169262 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:05.215206 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:05.268593 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:05.668641 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:05.715425 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:05.770249 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:06.170386 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:06.215629 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:06.269355 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:06.668800 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:06.688037 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:06.714972 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:06.769475 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:07.169325 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:07.215212 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:07.269233 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:07.672036 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:07.715685 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:07.774420 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:08.170032 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:08.215564 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:08.268314 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:08.669618 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:08.714796 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:08.768650 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:09.169973 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:09.187926 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:09.215863 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:09.268950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:09.669620 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:09.716013 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:09.769003 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:10.169937 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:10.215067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:10.269048 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:10.668821 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:10.714410 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:10.773115 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:11.168809 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:11.215230 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:11.268765 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:11.670789 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:11.685972 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:11.716113 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:11.768763 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:12.168851 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:12.215987 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:12.270347 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:12.668949 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:12.715881 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:12.768439 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:13.169333 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:13.215069 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:13.268589 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:13.668890 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:13.686843 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:13.715472 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:13.768844 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:14.170019 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:14.215665 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:14.269288 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:14.668999 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:14.714473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:14.769112 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:15.168559 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:15.215413 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:15.269984 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:15.696658 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:15.699239 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:15.739014 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:15.779270 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:16.172338 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:16.215279 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:16.269880 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:16.670206 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:16.715845 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:16.772154 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:17.169651 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:17.215651 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:17.268489 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:17.669469 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:17.714751 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:17.769087 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:18.171159 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:18.187735 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:18.215761 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:18.269911 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:18.670327 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:18.715637 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:18.768599 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:19.169726 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:19.215066 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:19.269796 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:19.669586 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:19.715135 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:19.769337 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:20.169970 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:20.215864 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:20.268771 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:20.670646 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:20.688145 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:20.714900 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:20.768814 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:21.169495 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:21.215329 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:21.269606 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:21.669579 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:21.714912 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:21.769067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:22.170696 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:22.214950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:22.269303 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:22.671169 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:22.691475 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:22.715487 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:22.770320 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:23.169651 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:23.215415 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:46:23.269524 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:23.670074 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:23.715521 1595550 kapi.go:107] duration metric: took 24.504452603s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:46:23.768357 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:24.169330 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:24.269176 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:24.674985 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:24.769813 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:25.188522 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:25.191069 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:25.269946 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:25.670013 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:25.772104 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:26.170038 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:26.268272 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:26.669085 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:26.768080 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:27.169270 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:27.269396 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:27.669466 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:27.686225 1595550 pod_ready.go:103] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"False"
	I0912 21:46:27.770233 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:28.169340 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:28.268743 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:28.669926 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:28.686206 1595550 pod_ready.go:93] pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:28.686235 1595550 pod_ready.go:82] duration metric: took 39.506350936s for pod "coredns-7c65d6cfc9-g2jtl" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.686246 1595550 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.690071 1595550 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hrb9k" not found
	I0912 21:46:28.690101 1595550 pod_ready.go:82] duration metric: took 3.847695ms for pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace to be "Ready" ...
	E0912 21:46:28.690113 1595550 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-hrb9k" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-hrb9k" not found
	I0912 21:46:28.690121 1595550 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.702606 1595550 pod_ready.go:93] pod "etcd-addons-648158" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:28.702636 1595550 pod_ready.go:82] duration metric: took 12.507966ms for pod "etcd-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.702650 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.713587 1595550 pod_ready.go:93] pod "kube-apiserver-addons-648158" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:28.713615 1595550 pod_ready.go:82] duration metric: took 10.95625ms for pod "kube-apiserver-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.713627 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.723742 1595550 pod_ready.go:93] pod "kube-controller-manager-addons-648158" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:28.723766 1595550 pod_ready.go:82] duration metric: took 10.131851ms for pod "kube-controller-manager-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.723781 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q549p" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.768636 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:28.883832 1595550 pod_ready.go:93] pod "kube-proxy-q549p" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:28.883860 1595550 pod_ready.go:82] duration metric: took 160.070713ms for pod "kube-proxy-q549p" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:28.883873 1595550 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:29.169634 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:29.269390 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:29.283773 1595550 pod_ready.go:93] pod "kube-scheduler-addons-648158" in "kube-system" namespace has status "Ready":"True"
	I0912 21:46:29.283800 1595550 pod_ready.go:82] duration metric: took 399.918515ms for pod "kube-scheduler-addons-648158" in "kube-system" namespace to be "Ready" ...
	I0912 21:46:29.283811 1595550 pod_ready.go:39] duration metric: took 40.113871028s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:46:29.283829 1595550 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:46:29.283892 1595550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:46:29.302139 1595550 api_server.go:72] duration metric: took 42.665131026s to wait for apiserver process to appear ...
	I0912 21:46:29.302218 1595550 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:46:29.302253 1595550 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:46:29.310840 1595550 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:46:29.311947 1595550 api_server.go:141] control plane version: v1.31.1
	I0912 21:46:29.311968 1595550 api_server.go:131] duration metric: took 9.730679ms to wait for apiserver health ...
	I0912 21:46:29.311978 1595550 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:46:29.490585 1595550 system_pods.go:59] 17 kube-system pods found
	I0912 21:46:29.490625 1595550 system_pods.go:61] "coredns-7c65d6cfc9-g2jtl" [b45d3244-e501-473d-a897-230dc34f1077] Running
	I0912 21:46:29.490635 1595550 system_pods.go:61] "csi-hostpath-attacher-0" [92ad0877-963f-48cf-9780-3322b096d442] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:46:29.490642 1595550 system_pods.go:61] "csi-hostpath-resizer-0" [c84263fa-c16e-4996-9c7d-4cd592123beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:46:29.490652 1595550 system_pods.go:61] "csi-hostpathplugin-whsg5" [ce162d67-971a-4cda-bdab-18421fb38423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:46:29.490658 1595550 system_pods.go:61] "etcd-addons-648158" [9423960f-eaef-4887-b1a1-d85a94bdcf6b] Running
	I0912 21:46:29.490670 1595550 system_pods.go:61] "kube-apiserver-addons-648158" [5be520a4-b7d1-4092-b3c7-9763ac147461] Running
	I0912 21:46:29.490675 1595550 system_pods.go:61] "kube-controller-manager-addons-648158" [1e51a3ca-0473-4cb7-a8cf-e7ce80c5b580] Running
	I0912 21:46:29.490679 1595550 system_pods.go:61] "kube-ingress-dns-minikube" [d0dae086-4398-437b-b5b6-17b722bf7b0b] Running
	I0912 21:46:29.490686 1595550 system_pods.go:61] "kube-proxy-q549p" [1d5423b4-56c7-4981-a867-72374a2f1f7b] Running
	I0912 21:46:29.490690 1595550 system_pods.go:61] "kube-scheduler-addons-648158" [2491d207-d29b-4008-93bd-ac17186459f5] Running
	I0912 21:46:29.490696 1595550 system_pods.go:61] "metrics-server-84c5f94fbc-k2dzp" [eb6c8928-90e8-498f-9bc2-1e0d328da8dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:46:29.490708 1595550 system_pods.go:61] "nvidia-device-plugin-daemonset-z4pwc" [c1e3c33a-ac28-4943-aad4-27c2cbb14eef] Running
	I0912 21:46:29.490713 1595550 system_pods.go:61] "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
	I0912 21:46:29.490717 1595550 system_pods.go:61] "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
	I0912 21:46:29.490726 1595550 system_pods.go:61] "snapshot-controller-56fcc65765-qh9vh" [eedbd380-8ebf-4ee3-a5f8-b988ea320828] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:46:29.490739 1595550 system_pods.go:61] "snapshot-controller-56fcc65765-wh5dd" [836602c0-e62c-4016-8973-eba07bf5ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:46:29.490744 1595550 system_pods.go:61] "storage-provisioner" [7f4fc819-9ab7-484b-97c4-d3f1243ced5f] Running
	I0912 21:46:29.490751 1595550 system_pods.go:74] duration metric: took 178.767061ms to wait for pod list to return data ...
	I0912 21:46:29.490762 1595550 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:46:29.670975 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:29.689860 1595550 default_sa.go:45] found service account: "default"
	I0912 21:46:29.689896 1595550 default_sa.go:55] duration metric: took 199.127089ms for default service account to be created ...
	I0912 21:46:29.689907 1595550 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:46:29.772550 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:29.897042 1595550 system_pods.go:86] 17 kube-system pods found
	I0912 21:46:29.897130 1595550 system_pods.go:89] "coredns-7c65d6cfc9-g2jtl" [b45d3244-e501-473d-a897-230dc34f1077] Running
	I0912 21:46:29.897167 1595550 system_pods.go:89] "csi-hostpath-attacher-0" [92ad0877-963f-48cf-9780-3322b096d442] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:46:29.897196 1595550 system_pods.go:89] "csi-hostpath-resizer-0" [c84263fa-c16e-4996-9c7d-4cd592123beb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:46:29.897271 1595550 system_pods.go:89] "csi-hostpathplugin-whsg5" [ce162d67-971a-4cda-bdab-18421fb38423] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:46:29.897305 1595550 system_pods.go:89] "etcd-addons-648158" [9423960f-eaef-4887-b1a1-d85a94bdcf6b] Running
	I0912 21:46:29.897336 1595550 system_pods.go:89] "kube-apiserver-addons-648158" [5be520a4-b7d1-4092-b3c7-9763ac147461] Running
	I0912 21:46:29.897363 1595550 system_pods.go:89] "kube-controller-manager-addons-648158" [1e51a3ca-0473-4cb7-a8cf-e7ce80c5b580] Running
	I0912 21:46:29.897393 1595550 system_pods.go:89] "kube-ingress-dns-minikube" [d0dae086-4398-437b-b5b6-17b722bf7b0b] Running
	I0912 21:46:29.897432 1595550 system_pods.go:89] "kube-proxy-q549p" [1d5423b4-56c7-4981-a867-72374a2f1f7b] Running
	I0912 21:46:29.897459 1595550 system_pods.go:89] "kube-scheduler-addons-648158" [2491d207-d29b-4008-93bd-ac17186459f5] Running
	I0912 21:46:29.897487 1595550 system_pods.go:89] "metrics-server-84c5f94fbc-k2dzp" [eb6c8928-90e8-498f-9bc2-1e0d328da8dd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0912 21:46:29.897515 1595550 system_pods.go:89] "nvidia-device-plugin-daemonset-z4pwc" [c1e3c33a-ac28-4943-aad4-27c2cbb14eef] Running
	I0912 21:46:29.897541 1595550 system_pods.go:89] "registry-66c9cd494c-k7dbs" [4a976b45-4ffe-45bb-bf8e-8235e03fda10] Running
	I0912 21:46:29.897573 1595550 system_pods.go:89] "registry-proxy-7zbh8" [ee258d2f-09b0-4915-82e1-123bba604752] Running
	I0912 21:46:29.897610 1595550 system_pods.go:89] "snapshot-controller-56fcc65765-qh9vh" [eedbd380-8ebf-4ee3-a5f8-b988ea320828] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:46:29.897644 1595550 system_pods.go:89] "snapshot-controller-56fcc65765-wh5dd" [836602c0-e62c-4016-8973-eba07bf5ac6b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:46:29.897674 1595550 system_pods.go:89] "storage-provisioner" [7f4fc819-9ab7-484b-97c4-d3f1243ced5f] Running
	I0912 21:46:29.897713 1595550 system_pods.go:126] duration metric: took 207.793843ms to wait for k8s-apps to be running ...
	I0912 21:46:29.897740 1595550 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:46:29.897844 1595550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:46:29.922841 1595550 system_svc.go:56] duration metric: took 25.092026ms WaitForService to wait for kubelet
	I0912 21:46:29.922927 1595550 kubeadm.go:582] duration metric: took 43.285916696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:46:29.922971 1595550 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:46:30.084587 1595550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0912 21:46:30.084674 1595550 node_conditions.go:123] node cpu capacity is 2
	I0912 21:46:30.084704 1595550 node_conditions.go:105] duration metric: took 161.694723ms to run NodePressure ...
	I0912 21:46:30.084733 1595550 start.go:241] waiting for startup goroutines ...
	I0912 21:46:30.177392 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:30.270044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:30.672081 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:30.772336 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:31.170354 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:31.270100 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:31.687457 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:31.769263 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:32.171212 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:32.269264 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:32.668925 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:32.769950 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:33.170105 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:33.270003 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:33.675988 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:33.775921 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:34.169756 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:34.269518 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:34.669476 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:34.768975 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:35.169966 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:35.271798 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:35.670794 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:35.773687 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:36.169449 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:36.269313 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:36.668836 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:36.768716 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:37.170134 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:37.269427 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:37.670977 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:37.771877 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:38.170762 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:38.270007 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:38.669148 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:38.768671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:39.250046 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:39.268893 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:39.670016 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:39.769514 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:40.169605 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:40.271111 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:40.670718 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:40.771886 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:41.169676 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:41.269242 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:41.669799 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:41.768693 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:42.170204 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:42.273999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:42.675722 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:42.776128 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:43.169292 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:43.269671 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:43.669761 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:43.769867 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:44.169699 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:44.269311 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:44.669778 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:44.768538 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:45.170264 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:45.271004 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:45.671118 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:45.771999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:46.176835 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:46.268304 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:46.670382 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:46.769207 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:47.168890 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:47.271789 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:47.669808 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:47.771187 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:48.169709 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:48.269244 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:48.670127 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:48.769736 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:49.170207 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:49.274740 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:49.668716 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:49.769341 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:50.170346 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:50.269124 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:50.670320 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:50.770415 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:51.168992 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:51.268625 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:51.669567 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:51.769647 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:52.170262 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:52.269562 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:52.670047 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:52.771392 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:53.169868 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:53.268340 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:53.669875 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:53.768254 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:54.169183 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:54.268473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:54.668916 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:54.768248 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:55.169373 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:55.268860 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:46:55.674665 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:55.772168 1595550 kapi.go:107] duration metric: took 55.508329256s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:46:56.168626 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:56.669345 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:57.169640 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:57.668824 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:58.169827 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:58.670247 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:59.169501 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:46:59.669083 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:00.179504 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:00.668846 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:01.170175 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:01.669862 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:02.174585 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:02.670221 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:03.169516 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:03.668719 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:04.169234 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:04.669589 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:05.169340 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:05.669639 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:06.171075 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:06.679265 1595550 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:47:07.170475 1595550 kapi.go:107] duration metric: took 1m10.505832645s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:47:24.375785 1595550 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:47:24.375814 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:24.855511 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:25.355191 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:25.854314 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:26.355546 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:26.854528 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:27.354522 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:27.855704 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:28.354999 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:28.855221 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:29.355090 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:29.855055 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:30.355044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:30.854780 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:31.355382 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:31.855273 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:32.355372 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:32.855481 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:33.354675 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:33.854478 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:34.355683 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:34.854707 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:35.355032 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:35.855433 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:36.355693 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:36.855712 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:37.354454 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:37.854918 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:38.354931 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:38.855069 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:39.354493 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:39.855774 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:40.355558 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:40.855159 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:41.355234 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:41.854676 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:42.356044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:42.855065 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:43.355012 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:43.855880 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:44.354711 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:44.855135 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:45.355121 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:45.855508 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:46.355130 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:46.855067 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:47.354539 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:47.855149 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:48.354588 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:48.856191 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:49.355081 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:49.858474 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:50.355061 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:50.854619 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:51.355672 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:51.855052 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:52.355214 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:52.855525 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:53.355486 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:53.854574 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:54.355201 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:54.855173 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:55.354895 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:55.854555 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:56.355592 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:56.854809 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:57.354828 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:57.855413 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:58.355473 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:58.855827 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:59.354836 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:47:59.854666 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:00.355171 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:00.856926 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:01.355634 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:01.855736 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:02.354713 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:02.855643 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:03.354094 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:03.854174 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:04.355175 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:04.854963 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:05.360000 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:05.854708 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:06.354658 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:06.854637 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:07.354953 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:07.854665 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:08.355563 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:08.855894 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:09.354869 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:09.855046 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:10.355048 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:10.854826 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:11.354406 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:11.855368 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:12.354795 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:12.854521 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:13.355278 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:13.854774 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:14.354510 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:14.857767 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:15.354685 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:15.854798 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:16.355569 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:16.854826 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:17.355266 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:17.855337 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:18.355179 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:18.855374 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:19.355044 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:19.854927 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:20.355021 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:20.857371 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:21.355150 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:21.855400 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:22.355534 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:22.855568 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:23.355759 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:23.855025 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:24.355093 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:24.856222 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:25.354230 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:25.854071 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:26.354616 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:26.855197 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:27.354253 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:27.854198 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:28.354876 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:28.855446 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:29.354987 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:29.854872 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:30.354559 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:30.855325 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:31.355470 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:31.855422 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:32.354965 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:32.856295 1595550 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:48:33.354854 1595550 kapi.go:107] duration metric: took 2m31.003674471s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:48:33.356435 1595550 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-648158 cluster.
	I0912 21:48:33.357906 1595550 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:48:33.359309 1595550 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:48:33.361304 1595550 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, default-storageclass, metrics-server, inspektor-gadget, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0912 21:48:33.363157 1595550 addons.go:510] duration metric: took 2m46.725785485s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns default-storageclass metrics-server inspektor-gadget volcano yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0912 21:48:33.363199 1595550 start.go:246] waiting for cluster config update ...
	I0912 21:48:33.363219 1595550 start.go:255] writing updated cluster config ...
	I0912 21:48:33.363498 1595550 ssh_runner.go:195] Run: rm -f paused
	I0912 21:48:33.730603 1595550 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0912 21:48:33.732519 1595550 out.go:177] * Done! kubectl is now configured to use "addons-648158" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.782591463Z" level=info msg="ignoring event" container=17a44cacfbdcb2f4ec16ac1bf1dcfc202467929c1d53afeecd8d3fc6f4329b5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.820549395Z" level=info msg="ignoring event" container=10b338530329858db82eae6608813035442f486a25ace047bf323992ccd5e39d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:57:58 addons-648158 dockerd[1289]: time="2024-09-12T21:57:58.900428538Z" level=info msg="ignoring event" container=5e4215422d2375aec0a0381ed2e145e7051002151e207a231712ee007a40e95b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:57:59 addons-648158 dockerd[1289]: time="2024-09-12T21:57:59.009891730Z" level=info msg="ignoring event" container=e1d325b34efd04a0dc1da7c980c08b49f42c84a2c254adee5bd132b45ff92198 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:57:59 addons-648158 dockerd[1289]: time="2024-09-12T21:57:59.047137752Z" level=info msg="ignoring event" container=95286393d5350846545c3a350994a74832db429cea42d0ee5c1dcd436adbe57b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="error getting RW layer size for container ID '07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742': Error response from daemon: No such container: 07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742"
	Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '07fd4a473c35ac84636124acdd02b0014320f1eec648bd5326c411ae3db57742'"
	Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="error getting RW layer size for container ID '2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33': Error response from daemon: No such container: 2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33"
	Sep 12 21:58:02 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2294ba816028ee89300d6f917714177c2d6857f8ee507a85ddc8cca5adf8ad33'"
	Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.280187502Z" level=info msg="ignoring event" container=56cfe2f7a61d9ba8c2457162a002eaf57ffa0da44da9e584e78e73d699f79024 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.302788044Z" level=info msg="ignoring event" container=51f3820127ef98ae747f2b9a2b9ec5ce0521ef2308e47e9a6bd767a18126b35d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.481644842Z" level=info msg="ignoring event" container=878b88ec8c15b6a9c0b0ba5d71fc0a05ac2329978113749d52b8ffb8b0c435d3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:05 addons-648158 dockerd[1289]: time="2024-09-12T21:58:05.506173363Z" level=info msg="ignoring event" container=5211cd8d531ceca7f3224ae4eb482d14a5ad3708be1bdc75d686bb96a9a46903 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:09 addons-648158 dockerd[1289]: time="2024-09-12T21:58:09.447780138Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:58:09 addons-648158 dockerd[1289]: time="2024-09-12T21:58:09.450398710Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 12 21:58:13 addons-648158 dockerd[1289]: time="2024-09-12T21:58:13.093300283Z" level=info msg="ignoring event" container=52adf9fce3141d54f4b7944ff34c9e4932a45c423a7b39753b3252155eb946e5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:13 addons-648158 dockerd[1289]: time="2024-09-12T21:58:13.214984652Z" level=info msg="ignoring event" container=d87856d84e8fe7e3e4367d44f238b08d775909aace632b56a45818021458fbde module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:18 addons-648158 dockerd[1289]: time="2024-09-12T21:58:18.755570688Z" level=info msg="ignoring event" container=dc56e3507ff917af8589f93b70a47c46a3a4f5a1ce2d37d328bb085263934d01 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:24 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/51674a28bdbb97666e111c4f37326466e4b4466344f77a64f6eb59ecba596213/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 12 21:58:26 addons-648158 cri-dockerd[1547]: time="2024-09-12T21:58:26Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Sep 12 21:58:27 addons-648158 dockerd[1289]: time="2024-09-12T21:58:27.666887691Z" level=info msg="ignoring event" container=a2155d553aab7fc161e68225e40ce026fe0d51c360f87c7dc63997bb67fded04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.383682927Z" level=info msg="ignoring event" container=3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.463374531Z" level=info msg="ignoring event" container=672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.588014154Z" level=info msg="ignoring event" container=16d9387dee557367d8e5641c9c0386d812e0e3945335f3d5294a2681ff76c5ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 12 21:58:28 addons-648158 dockerd[1289]: time="2024-09-12T21:58:28.819239909Z" level=info msg="ignoring event" container=aa6af1ff5693aae2cd14b170cd775ec554d34f4c7ec1db00cb6cfda508dd1b72 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	2f002ec003e27       nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf                                                3 seconds ago       Running             nginx                      0                   51674a28bdbb9       nginx
	5aa737c3896b8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   19da29b2da55a       gcp-auth-89d5ffd79-s7q4h
	5d67e1b5df7f7       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   4211f01616c4c       ingress-nginx-controller-bc57996ff-696bh
	3ee823d52330f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   211b5fa5c3a03       ingress-nginx-admission-patch-ssbgc
	7ea4889619ec7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   a8e90a2d2f882       ingress-nginx-admission-create-wjqxw
	f0864ef4b730b       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   896593063d650       local-path-provisioner-86d989889c-xbncw
	e374ff2318f0f       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   78452ebde635f       yakd-dashboard-67d98fc6b-n5gz7
	c347945eead17       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   4178c11db9b3f       kube-ingress-dns-minikube
	cbe25e6874536       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   6daffe1a9d10c       cloud-spanner-emulator-769b77f747-cdpm7
	5596da7bfeb4b       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   65021917ba0b0       nvidia-device-plugin-daemonset-z4pwc
	046920352d77b       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   83a44bd844c96       storage-provisioner
	5c398510d84ba       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                    0                   9546c123c461d       coredns-7c65d6cfc9-g2jtl
	19937b7e96a03       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                 0                   60896dc860310       kube-proxy-q549p
	fdf5b03dfd7ab       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   5f8f5a06066b8       kube-scheduler-addons-648158
	229fc23ce1858       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   0a9495f60249b       etcd-addons-648158
	2721a50c3ab1c       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   d87f312af29b3       kube-controller-manager-addons-648158
	32402b3960159       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver             0                   5d29da3ec3995       kube-apiserver-addons-648158
	
	
	==> controller_ingress [5d67e1b5df7f] <==
	I0912 21:47:07.397311       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"c33493ec-51e6-4ab9-a543-52417f292017", APIVersion:"v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0912 21:47:07.399377       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"dd562a3d-e902-4326-8410-bd0d19832ac6", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0912 21:47:07.399550       8 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c6893e2b-37f5-4831-8e2a-92bee50dee61", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0912 21:47:08.586389       8 nginx.go:317] "Starting NGINX process"
	I0912 21:47:08.586623       8 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0912 21:47:08.587092       8 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0912 21:47:08.587320       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:47:08.615250       8 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0912 21:47:08.615430       8 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-696bh"
	I0912 21:47:08.627030       8 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-696bh" node="addons-648158"
	I0912 21:47:08.650858       8 controller.go:213] "Backend successfully reloaded"
	I0912 21:47:08.651078       8 controller.go:224] "Initial sync, sleeping for 1 second"
	I0912 21:47:08.651194       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0912 21:58:24.195943       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0912 21:58:24.216840       8 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.021s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.021s testedConfigurationSize:18.1kB}
	I0912 21:58:24.216879       8 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0912 21:58:24.223362       8 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0912 21:58:24.223690       8 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0912 21:58:24.223762       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:58:24.225362       8 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"c6fb3b37-278e-4c23-ae6b-a1cab589f6d6", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2778", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0912 21:58:24.288725       8 controller.go:213] "Backend successfully reloaded"
	I0912 21:58:24.289148       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0912 21:58:27.557462       8 controller.go:193] "Configuration changes detected, backend reload required"
	I0912 21:58:27.604298       8 controller.go:213] "Backend successfully reloaded"
	I0912 21:58:27.604600       8 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-696bh", UID:"52c0720b-3981-4108-8256-513a00d49197", APIVersion:"v1", ResourceVersion:"1245", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [5c398510d84b] <==
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	[INFO] Reloading complete
	[INFO] 127.0.0.1:58409 - 24075 "HINFO IN 8776178255420352184.1321134477801143313. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01465829s
	[INFO] 10.244.0.7:54519 - 13842 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319804s
	[INFO] 10.244.0.7:54519 - 46870 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122418s
	[INFO] 10.244.0.7:51322 - 1341 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000169342s
	[INFO] 10.244.0.7:51322 - 41784 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129925s
	[INFO] 10.244.0.7:38911 - 28253 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000109938s
	[INFO] 10.244.0.7:38911 - 51547 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000224643s
	[INFO] 10.244.0.7:37612 - 54543 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093709s
	[INFO] 10.244.0.7:37612 - 40201 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088589s
	[INFO] 10.244.0.7:59467 - 33407 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002662648s
	[INFO] 10.244.0.7:59467 - 33282 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002996769s
	[INFO] 10.244.0.7:55106 - 59925 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000125864s
	[INFO] 10.244.0.7:55106 - 39959 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000071785s
	[INFO] 10.244.0.25:43052 - 3287 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00048236s
	[INFO] 10.244.0.25:36270 - 1884 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00025981s
	[INFO] 10.244.0.25:45228 - 57249 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000306848s
	[INFO] 10.244.0.25:48284 - 58032 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000492058s
	[INFO] 10.244.0.25:39213 - 6518 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000269835s
	[INFO] 10.244.0.25:48722 - 48718 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000268343s
	[INFO] 10.244.0.25:40526 - 5209 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002962407s
	[INFO] 10.244.0.25:33228 - 32375 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003231603s
	[INFO] 10.244.0.25:50767 - 1726 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002239782s
	[INFO] 10.244.0.25:55250 - 34195 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002946104s
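The NXDOMAIN answers above are the usual pod resolv.conf search-path expansion (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the node's compute.internal domain) before the bare name resolves; only the final fully-qualified queries return NOERROR. For triaging runs like this one, a small sketch (a hypothetical helper, not part of minikube or this test suite) that parses CoreDNS query-log lines of the format shown:

```python
import re

# Matches CoreDNS query-log lines such as:
# [INFO] 10.244.0.7:54519 - 13842 "A IN name. udp 56 false 512" NOERROR qr,aa,rd 110 0.000319804s
LINE_RE = re.compile(
    r'\[INFO\] (?P<client>[\d.]+):(?P<port>\d+) - (?P<id>\d+) '
    r'"(?P<qtype>\S+) IN (?P<name>\S+) (?P<proto>\S+) \d+ \S+ \d+" '
    r'(?P<rcode>\S+) (?P<flags>\S+) (?P<size>\d+) (?P<duration>[\d.]+)s'
)

def parse(line: str):
    """Return the parsed fields of one query-log line, or None if it doesn't match."""
    m = LINE_RE.search(line)
    return m.groupdict() if m else None

rec = parse('[INFO] 10.244.0.7:54519 - 13842 "A IN registry.kube-system.svc.cluster.local. '
            'udp 56 false 512" NOERROR qr,aa,rd 110 0.000319804s')
```

Filtering the parsed records on `rcode != "NOERROR"` quickly separates the expected search-path misses from a genuine resolution failure of `registry.kube-system.svc.cluster.local`.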
	
	
	==> describe nodes <==
	Name:               addons-648158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-648158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f6bc674a17941874d4e5b792b09c1791d30622b8
	                    minikube.k8s.io/name=addons-648158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_12T21_45_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-648158
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 12 Sep 2024 21:45:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-648158
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 12 Sep 2024 21:58:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 12 Sep 2024 21:54:21 +0000   Thu, 12 Sep 2024 21:45:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-648158
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7cd4665a5ebe4de7b1de26fc1d9805e5
	  System UUID:                1b17f2fa-7c9a-437f-82b1-3b9942bbda88
	  Boot ID:                    f14c6faf-727c-4a6f-be07-d8fb37c7dc91
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-cdpm7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  gcp-auth                    gcp-auth-89d5ffd79-s7q4h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-696bh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-g2jtl                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-648158                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-648158                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-648158       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-q549p                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-648158                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-z4pwc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-xbncw     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-n5gz7              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             388Mi (4%)  426Mi (5%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-648158 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-648158 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-648158 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-648158 event: Registered Node addons-648158 in Controller
	
	
	==> dmesg <==
	[Sep12 21:14] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000008] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Sep12 21:18] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
	
	
	==> etcd [229fc23ce185] <==
	{"level":"info","ts":"2024-09-12T21:45:35.351401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-12T21:45:35.351476Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-12T21:45:36.325063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-12T21:45:36.325164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-12T21:45:36.325246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-12T21:45:36.325297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:36.325346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:36.325396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:36.325425Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-12T21:45:36.328713Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:45:36.333180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-648158 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-12T21:45:36.333413Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:45:36.333788Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-12T21:45:36.333991Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-12T21:45:36.334038Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-12T21:45:36.335165Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:45:36.336122Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-12T21:45:36.341105Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:45:36.341236Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:45:36.341313Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-12T21:45:36.342105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-12T21:45:36.343074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-12T21:55:36.488078Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1859}
	{"level":"info","ts":"2024-09-12T21:55:36.551492Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1859,"took":"62.401119ms","hash":3646322125,"current-db-size-bytes":9003008,"current-db-size":"9.0 MB","current-db-size-in-use-bytes":4952064,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-12T21:55:36.551548Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3646322125,"revision":1859,"compact-revision":-1}
	
	
	==> gcp-auth [5aa737c3896b] <==
	2024/09/12 21:48:32 GCP Auth Webhook started!
	2024/09/12 21:48:50 Ready to marshal response ...
	2024/09/12 21:48:50 Ready to write response ...
	2024/09/12 21:48:50 Ready to marshal response ...
	2024/09/12 21:48:50 Ready to write response ...
	2024/09/12 21:49:13 Ready to marshal response ...
	2024/09/12 21:49:13 Ready to write response ...
	2024/09/12 21:49:14 Ready to marshal response ...
	2024/09/12 21:49:14 Ready to write response ...
	2024/09/12 21:49:14 Ready to marshal response ...
	2024/09/12 21:49:14 Ready to write response ...
	2024/09/12 21:57:24 Ready to marshal response ...
	2024/09/12 21:57:24 Ready to write response ...
	2024/09/12 21:57:27 Ready to marshal response ...
	2024/09/12 21:57:27 Ready to write response ...
	2024/09/12 21:57:49 Ready to marshal response ...
	2024/09/12 21:57:49 Ready to write response ...
	2024/09/12 21:58:24 Ready to marshal response ...
	2024/09/12 21:58:24 Ready to write response ...
	
	
	==> kernel <==
	 21:58:29 up  6:40,  0 users,  load average: 1.53, 1.02, 1.78
	Linux addons-648158 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [32402b396015] <==
	W0912 21:49:05.789737       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0912 21:49:05.832384       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0912 21:49:05.867941       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0912 21:49:06.262602       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0912 21:49:06.440846       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0912 21:57:32.801324       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E0912 21:57:34.964620       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0912 21:57:57.414726       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I0912 21:58:04.976109       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:58:04.976159       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:58:05.006191       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:58:05.006260       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:58:05.015169       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:58:05.015236       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:58:05.041792       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:58:05.041839       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:58:05.195212       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:58:05.195257       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0912 21:58:06.016796       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:58:06.195662       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0912 21:58:06.223587       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0912 21:58:18.643675       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0912 21:58:19.669334       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0912 21:58:24.217892       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0912 21:58:24.543092       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.231.111"}
	
	
	==> kube-controller-manager [2721a50c3ab1] <==
	I0912 21:58:16.144780       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0912 21:58:16.144825       1 shared_informer.go:320] Caches are synced for resource quota
	I0912 21:58:16.403035       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0912 21:58:16.403088       1 shared_informer.go:320] Caches are synced for garbage collector
	W0912 21:58:16.786856       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:16.786986       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:17.660020       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:17.660064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0912 21:58:19.670948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:21.188758       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:21.188798       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:22.253411       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:22.253453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:22.890316       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:22.890357       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:24.156110       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:24.156160       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:27.590408       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:27.590469       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0912 21:58:27.673783       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:27.673834       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0912 21:58:28.282551       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.104µs"
	I0912 21:58:28.755736       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0912 21:58:29.821979       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:58:29.822018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [19937b7e96a0] <==
	I0912 21:45:47.952842       1 server_linux.go:66] "Using iptables proxy"
	I0912 21:45:48.094838       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0912 21:45:48.094915       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0912 21:45:48.137889       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:45:48.137978       1 server_linux.go:169] "Using iptables Proxier"
	I0912 21:45:48.140113       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0912 21:45:48.140413       1 server.go:483] "Version info" version="v1.31.1"
	I0912 21:45:48.140427       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:45:48.141815       1 config.go:199] "Starting service config controller"
	I0912 21:45:48.141856       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0912 21:45:48.141885       1 config.go:105] "Starting endpoint slice config controller"
	I0912 21:45:48.141889       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0912 21:45:48.145003       1 config.go:328] "Starting node config controller"
	I0912 21:45:48.145045       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0912 21:45:48.242327       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0912 21:45:48.242401       1 shared_informer.go:320] Caches are synced for service config
	I0912 21:45:48.245969       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [fdf5b03dfd7a] <==
	E0912 21:45:38.855250       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:38.854121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:45:38.855435       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:38.854193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:45:38.855632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:38.854237       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:45:38.855846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:38.854283       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:45:38.856027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0912 21:45:38.856167       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.685057       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:45:39.685294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.686564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:45:39.686757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.688821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:45:39.688851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.697690       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:45:39.697734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.732230       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:45:39.732510       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0912 21:45:39.777106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:45:39.777147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0912 21:45:39.821395       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:45:39.821438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0912 21:45:42.622076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 12 21:58:26 addons-648158 kubelet[2335]: E0912 21:58:26.181243    2335 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="4e96a45b-06bb-4568-9f4e-c7824346aa4d"
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.477491    2335 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=1.817970224 podStartE2EDuration="3.477468367s" podCreationTimestamp="2024-09-12 21:58:24 +0000 UTC" firstStartedPulling="2024-09-12 21:58:25.033560295 +0000 UTC m=+764.020565768" lastFinishedPulling="2024-09-12 21:58:26.693058437 +0000 UTC m=+765.680063911" observedRunningTime="2024-09-12 21:58:27.065886935 +0000 UTC m=+766.052892426" watchObservedRunningTime="2024-09-12 21:58:27.477468367 +0000 UTC m=+766.464473858"
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.824672    2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds\") pod \"f6bb46dc-e671-4c74-b57f-680c90cfb909\" (UID: \"f6bb46dc-e671-4c74-b57f-680c90cfb909\") "
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.824727    2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtx86\" (UniqueName: \"kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86\") pod \"f6bb46dc-e671-4c74-b57f-680c90cfb909\" (UID: \"f6bb46dc-e671-4c74-b57f-680c90cfb909\") "
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.825119    2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f6bb46dc-e671-4c74-b57f-680c90cfb909" (UID: "f6bb46dc-e671-4c74-b57f-680c90cfb909"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.844761    2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86" (OuterVolumeSpecName: "kube-api-access-qtx86") pod "f6bb46dc-e671-4c74-b57f-680c90cfb909" (UID: "f6bb46dc-e671-4c74-b57f-680c90cfb909"). InnerVolumeSpecName "kube-api-access-qtx86". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.925283    2335 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f6bb46dc-e671-4c74-b57f-680c90cfb909-gcp-creds\") on node \"addons-648158\" DevicePath \"\""
	Sep 12 21:58:27 addons-648158 kubelet[2335]: I0912 21:58:27.925320    2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qtx86\" (UniqueName: \"kubernetes.io/projected/f6bb46dc-e671-4c74-b57f-680c90cfb909-kube-api-access-qtx86\") on node \"addons-648158\" DevicePath \"\""
	Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.835870    2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj4gc\" (UniqueName: \"kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc\") pod \"4a976b45-4ffe-45bb-bf8e-8235e03fda10\" (UID: \"4a976b45-4ffe-45bb-bf8e-8235e03fda10\") "
	Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.838614    2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc" (OuterVolumeSpecName: "kube-api-access-rj4gc") pod "4a976b45-4ffe-45bb-bf8e-8235e03fda10" (UID: "4a976b45-4ffe-45bb-bf8e-8235e03fda10"). InnerVolumeSpecName "kube-api-access-rj4gc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:58:28 addons-648158 kubelet[2335]: I0912 21:58:28.936948    2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rj4gc\" (UniqueName: \"kubernetes.io/projected/4a976b45-4ffe-45bb-bf8e-8235e03fda10-kube-api-access-rj4gc\") on node \"addons-648158\" DevicePath \"\""
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.037680    2335 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgcfm\" (UniqueName: \"kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm\") pod \"ee258d2f-09b0-4915-82e1-123bba604752\" (UID: \"ee258d2f-09b0-4915-82e1-123bba604752\") "
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.039800    2335 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm" (OuterVolumeSpecName: "kube-api-access-lgcfm") pod "ee258d2f-09b0-4915-82e1-123bba604752" (UID: "ee258d2f-09b0-4915-82e1-123bba604752"). InnerVolumeSpecName "kube-api-access-lgcfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.102781    2335 scope.go:117] "RemoveContainer" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.139708    2335 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-lgcfm\" (UniqueName: \"kubernetes.io/projected/ee258d2f-09b0-4915-82e1-123bba604752-kube-api-access-lgcfm\") on node \"addons-648158\" DevicePath \"\""
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.163249    2335 scope.go:117] "RemoveContainer" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: E0912 21:58:29.166068    2335 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10" containerID="672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.166148    2335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"} err="failed to get container status \"672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10\": rpc error: code = Unknown desc = Error response from daemon: No such container: 672ba13f023168f86ba9cce29bfc0911c5930bf0025f14ce4a00a8f0f30bdd10"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.166176    2335 scope.go:117] "RemoveContainer" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.197897    2335 scope.go:117] "RemoveContainer" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: E0912 21:58:29.199161    2335 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd" containerID="3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.199306    2335 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"} err="failed to get container status \"3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd\": rpc error: code = Unknown desc = Error response from daemon: No such container: 3c6f07b71fbce19e20212ab8871ea4a36f87266b382cb120ca38eea6e30afacd"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.205655    2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a976b45-4ffe-45bb-bf8e-8235e03fda10" path="/var/lib/kubelet/pods/4a976b45-4ffe-45bb-bf8e-8235e03fda10/volumes"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.207571    2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee258d2f-09b0-4915-82e1-123bba604752" path="/var/lib/kubelet/pods/ee258d2f-09b0-4915-82e1-123bba604752/volumes"
	Sep 12 21:58:29 addons-648158 kubelet[2335]: I0912 21:58:29.210653    2335 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bb46dc-e671-4c74-b57f-680c90cfb909" path="/var/lib/kubelet/pods/f6bb46dc-e671-4c74-b57f-680c90cfb909/volumes"
	
	
	==> storage-provisioner [046920352d77] <==
	I0912 21:45:53.574326       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:45:53.590499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:45:53.590544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:45:53.602104       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:45:53.602444       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-648158_86a231c6-688e-464b-b16e-4dbe50672663!
	I0912 21:45:53.603160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2c5fce5c-cd93-4709-b47b-dcd1c6fac236", APIVersion:"v1", ResourceVersion:"504", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-648158_86a231c6-688e-464b-b16e-4dbe50672663 became leader
	I0912 21:45:53.702814       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-648158_86a231c6-688e-464b-b16e-4dbe50672663!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-648158 -n addons-648158
helpers_test.go:261: (dbg) Run:  kubectl --context addons-648158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc: exit status 1 (98.911663ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-648158/192.168.49.2
	Start Time:       Thu, 12 Sep 2024 21:49:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ltfqw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ltfqw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-648158
	  Normal   Pulling    7m56s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m56s (x4 over 9m16s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m56s (x4 over 9m16s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m28s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m4s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-wjqxw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ssbgc" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-648158 describe pod busybox ingress-nginx-admission-create-wjqxw ingress-nginx-admission-patch-ssbgc: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.60s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 15.59
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.11
9 TestDownloadOnly/v1.20.0/DeleteAll 0.32
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.31.1/json-events 5.03
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
22 TestOffline 62.49
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 221.49
29 TestAddons/serial/Volcano 40.02
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 19.43
35 TestAddons/parallel/InspektorGadget 11.72
36 TestAddons/parallel/MetricsServer 6.72
39 TestAddons/parallel/CSI 48.16
40 TestAddons/parallel/Headlamp 18.64
41 TestAddons/parallel/CloudSpanner 5.48
42 TestAddons/parallel/LocalPath 53.47
43 TestAddons/parallel/NvidiaDevicePlugin 6.47
44 TestAddons/parallel/Yakd 11.69
45 TestAddons/StoppedEnableDisable 6.01
46 TestCertOptions 35.54
47 TestCertExpiration 248.71
48 TestDockerFlags 45
49 TestForceSystemdFlag 44.58
50 TestForceSystemdEnv 45.33
56 TestErrorSpam/setup 31.98
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.44
60 TestErrorSpam/unpause 1.49
61 TestErrorSpam/stop 11.04
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 45.49
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 36.84
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.11
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.34
73 TestFunctional/serial/CacheCmd/cache/add_local 1.05
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.65
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 41.44
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.17
84 TestFunctional/serial/LogsFileCmd 1.19
85 TestFunctional/serial/InvalidService 4.57
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 10.94
89 TestFunctional/parallel/DryRun 0.45
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.35
95 TestFunctional/parallel/ServiceCmdConnect 12.64
96 TestFunctional/parallel/AddonsCmd 0.23
97 TestFunctional/parallel/PersistentVolumeClaim 27.96
99 TestFunctional/parallel/SSHCmd 0.66
100 TestFunctional/parallel/CpCmd 2.36
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.09
107 TestFunctional/parallel/NodeLabels 0.08
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
111 TestFunctional/parallel/License 0.19
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.43
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
125 TestFunctional/parallel/ServiceCmd/List 0.6
126 TestFunctional/parallel/ProfileCmd/profile_list 0.47
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
130 TestFunctional/parallel/MountCmd/any-port 8.68
131 TestFunctional/parallel/ServiceCmd/Format 0.4
132 TestFunctional/parallel/ServiceCmd/URL 0.49
133 TestFunctional/parallel/MountCmd/specific-port 2.3
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.57
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.16
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.45
142 TestFunctional/parallel/ImageCommands/Setup 0.83
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
146 TestFunctional/parallel/DockerEnv/bash 1.17
147 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.23
148 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.01
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 121.9
161 TestMultiControlPlane/serial/DeployApp 41.33
162 TestMultiControlPlane/serial/PingHostFromPods 1.62
163 TestMultiControlPlane/serial/AddWorkerNode 28.05
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
166 TestMultiControlPlane/serial/CopyFile 19.84
167 TestMultiControlPlane/serial/StopSecondaryNode 11.78
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
169 TestMultiControlPlane/serial/RestartSecondaryNode 30.81
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 16.21
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 224.51
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.21
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
174 TestMultiControlPlane/serial/StopCluster 33.04
175 TestMultiControlPlane/serial/RestartCluster 87.89
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.51
177 TestMultiControlPlane/serial/AddSecondaryNode 45.94
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
181 TestImageBuild/serial/Setup 31.31
182 TestImageBuild/serial/NormalBuild 1.81
183 TestImageBuild/serial/BuildWithBuildArg 1.03
184 TestImageBuild/serial/BuildWithDockerIgnore 0.91
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.76
189 TestJSONOutput/start/Command 76.16
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.68
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.57
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 10.94
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 36.12
215 TestKicCustomNetwork/use_default_bridge_network 32.8
216 TestKicExistingNetwork 34.81
217 TestKicCustomSubnet 32.26
218 TestKicStaticIP 32.86
219 TestMainNoArgs 0.05
220 TestMinikubeProfile 72.1
223 TestMountStart/serial/StartWithMountFirst 8.51
224 TestMountStart/serial/VerifyMountFirst 0.27
225 TestMountStart/serial/StartWithMountSecond 8.3
226 TestMountStart/serial/VerifyMountSecond 0.26
227 TestMountStart/serial/DeleteFirst 1.48
228 TestMountStart/serial/VerifyMountPostDelete 0.26
229 TestMountStart/serial/Stop 1.2
230 TestMountStart/serial/RestartStopped 8.49
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 82.79
235 TestMultiNode/serial/DeployApp2Nodes 39.13
236 TestMultiNode/serial/PingHostFrom2Pods 1.02
237 TestMultiNode/serial/AddNode 20.6
238 TestMultiNode/serial/MultiNodeLabels 0.1
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.14
241 TestMultiNode/serial/StopNode 2.28
242 TestMultiNode/serial/StartAfterStop 11.32
243 TestMultiNode/serial/RestartKeepsNodes 98.19
244 TestMultiNode/serial/DeleteNode 5.61
245 TestMultiNode/serial/StopMultiNode 21.67
246 TestMultiNode/serial/RestartMultiNode 55.75
247 TestMultiNode/serial/ValidateNameConflict 37.09
252 TestPreload 142.52
254 TestScheduledStopUnix 105.37
255 TestSkaffold 114.31
257 TestInsufficientStorage 10.92
258 TestRunningBinaryUpgrade 101.29
260 TestKubernetesUpgrade 371.57
261 TestMissingContainerUpgrade 168.63
263 TestPause/serial/Start 85.3
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
266 TestNoKubernetes/serial/StartWithK8s 34.74
267 TestPause/serial/SecondStartNoReconfiguration 31.81
268 TestNoKubernetes/serial/StartWithStopK8s 18.39
269 TestNoKubernetes/serial/Start 8.78
270 TestPause/serial/Pause 1.35
271 TestPause/serial/VerifyStatus 0.55
272 TestPause/serial/Unpause 0.7
273 TestPause/serial/PauseAgain 0.94
274 TestPause/serial/DeletePaused 2.29
275 TestPause/serial/VerifyDeletedResources 0.57
287 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
288 TestNoKubernetes/serial/ProfileList 0.76
289 TestNoKubernetes/serial/Stop 1.25
290 TestNoKubernetes/serial/StartNoArgs 8.83
291 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
292 TestStoppedBinaryUpgrade/Setup 0.72
293 TestStoppedBinaryUpgrade/Upgrade 103.39
301 TestNetworkPlugins/group/auto/Start 83.77
302 TestStoppedBinaryUpgrade/MinikubeLogs 2.37
303 TestNetworkPlugins/group/kindnet/Start 59.1
304 TestNetworkPlugins/group/auto/KubeletFlags 0.44
305 TestNetworkPlugins/group/auto/NetCatPod 13.4
306 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
307 TestNetworkPlugins/group/auto/DNS 0.19
308 TestNetworkPlugins/group/auto/Localhost 0.19
309 TestNetworkPlugins/group/auto/HairPin 0.19
310 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
311 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
312 TestNetworkPlugins/group/kindnet/DNS 0.29
313 TestNetworkPlugins/group/kindnet/Localhost 0.41
314 TestNetworkPlugins/group/kindnet/HairPin 0.23
315 TestNetworkPlugins/group/calico/Start 87.8
316 TestNetworkPlugins/group/custom-flannel/Start 64.74
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
319 TestNetworkPlugins/group/calico/ControllerPod 6.01
320 TestNetworkPlugins/group/custom-flannel/DNS 0.21
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
323 TestNetworkPlugins/group/calico/KubeletFlags 0.3
324 TestNetworkPlugins/group/calico/NetCatPod 11.32
325 TestNetworkPlugins/group/calico/DNS 0.28
326 TestNetworkPlugins/group/calico/Localhost 0.35
327 TestNetworkPlugins/group/calico/HairPin 0.24
328 TestNetworkPlugins/group/false/Start 86.04
329 TestNetworkPlugins/group/enable-default-cni/Start 53.43
330 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
331 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.31
332 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
333 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
334 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
335 TestNetworkPlugins/group/false/KubeletFlags 0.29
336 TestNetworkPlugins/group/false/NetCatPod 12.28
337 TestNetworkPlugins/group/false/DNS 0.3
338 TestNetworkPlugins/group/false/Localhost 0.24
339 TestNetworkPlugins/group/false/HairPin 0.23
340 TestNetworkPlugins/group/flannel/Start 59.88
341 TestNetworkPlugins/group/bridge/Start 82.88
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
344 TestNetworkPlugins/group/flannel/NetCatPod 10.37
345 TestNetworkPlugins/group/flannel/DNS 0.18
346 TestNetworkPlugins/group/flannel/Localhost 0.17
347 TestNetworkPlugins/group/flannel/HairPin 0.16
348 TestNetworkPlugins/group/kubenet/Start 85.24
349 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
350 TestNetworkPlugins/group/bridge/NetCatPod 10.47
351 TestNetworkPlugins/group/bridge/DNS 0.25
352 TestNetworkPlugins/group/bridge/Localhost 0.2
353 TestNetworkPlugins/group/bridge/HairPin 0.23
355 TestStartStop/group/old-k8s-version/serial/FirstStart 145.17
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
357 TestNetworkPlugins/group/kubenet/NetCatPod 13.45
358 TestNetworkPlugins/group/kubenet/DNS 0.2
359 TestNetworkPlugins/group/kubenet/Localhost 0.19
360 TestNetworkPlugins/group/kubenet/HairPin 0.18
362 TestStartStop/group/no-preload/serial/FirstStart 86.14
363 TestStartStop/group/old-k8s-version/serial/DeployApp 10.51
364 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.28
365 TestStartStop/group/old-k8s-version/serial/Stop 11.07
366 TestStartStop/group/no-preload/serial/DeployApp 9.44
367 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
368 TestStartStop/group/old-k8s-version/serial/SecondStart 140.6
369 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.72
370 TestStartStop/group/no-preload/serial/Stop 11
371 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
372 TestStartStop/group/no-preload/serial/SecondStart 268.17
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
376 TestStartStop/group/old-k8s-version/serial/Pause 2.88
378 TestStartStop/group/embed-certs/serial/FirstStart 47.39
379 TestStartStop/group/embed-certs/serial/DeployApp 8.38
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.07
381 TestStartStop/group/embed-certs/serial/Stop 11.05
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/embed-certs/serial/SecondStart 289.4
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
387 TestStartStop/group/no-preload/serial/Pause 3.33
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.4
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.37
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.19
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.05
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
398 TestStartStop/group/embed-certs/serial/Pause 2.92
400 TestStartStop/group/newest-cni/serial/FirstStart 40.38
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.17
403 TestStartStop/group/newest-cni/serial/Stop 5.73
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
405 TestStartStop/group/newest-cni/serial/SecondStart 18.05
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
409 TestStartStop/group/newest-cni/serial/Pause 3.02
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.8
TestDownloadOnly/v1.20.0/json-events (15.59s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-658229 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-658229 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (15.590026457s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.59s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-658229
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-658229: exit status 85 (104.628871ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |          |
	|         | -p download-only-658229        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:44:29
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:44:29.298367 1594800 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:44:29.298487 1594800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:29.298498 1594800 out.go:358] Setting ErrFile to fd 2...
	I0912 21:44:29.298503 1594800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:29.298755 1594800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	W0912 21:44:29.298880 1594800 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19616-1589418/.minikube/config/config.json: open /home/jenkins/minikube-integration/19616-1589418/.minikube/config/config.json: no such file or directory
	I0912 21:44:29.299275 1594800 out.go:352] Setting JSON to true
	I0912 21:44:29.300177 1594800 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23210,"bootTime":1726154260,"procs":163,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0912 21:44:29.300246 1594800 start.go:139] virtualization:  
	I0912 21:44:29.302799 1594800 out.go:97] [download-only-658229] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0912 21:44:29.302930 1594800 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:44:29.302960 1594800 notify.go:220] Checking for updates...
	I0912 21:44:29.304538 1594800 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:44:29.305881 1594800 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:44:29.307397 1594800 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 21:44:29.308787 1594800 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	I0912 21:44:29.310194 1594800 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0912 21:44:29.312846 1594800 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:44:29.313126 1594800 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:44:29.337109 1594800 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:44:29.337205 1594800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:29.402893 1594800 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 21:44:29.393325216 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:29.403033 1594800 docker.go:318] overlay module found
	I0912 21:44:29.404493 1594800 out.go:97] Using the docker driver based on user configuration
	I0912 21:44:29.404518 1594800 start.go:297] selected driver: docker
	I0912 21:44:29.404525 1594800 start.go:901] validating driver "docker" against <nil>
	I0912 21:44:29.404638 1594800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:29.462386 1594800 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 21:44:29.452798743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:29.462544 1594800 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:44:29.462837 1594800 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0912 21:44:29.463001 1594800 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:44:29.464531 1594800 out.go:169] Using Docker driver with root privileges
	I0912 21:44:29.465859 1594800 cni.go:84] Creating CNI manager for ""
	I0912 21:44:29.465889 1594800 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0912 21:44:29.466085 1594800 start.go:340] cluster config:
	{Name:download-only-658229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-658229 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 21:44:29.467461 1594800 out.go:97] Starting "download-only-658229" primary control-plane node in "download-only-658229" cluster
	I0912 21:44:29.467485 1594800 cache.go:121] Beginning downloading kic base image for docker with docker
	I0912 21:44:29.468729 1594800 out.go:97] Pulling base image v0.0.45-1726156396-19616 ...
	I0912 21:44:29.468771 1594800 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:44:29.468927 1594800 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local docker daemon
	I0912 21:44:29.483304 1594800 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:44:29.483946 1594800 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 in local cache directory
	I0912 21:44:29.484049 1594800 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 to local cache
	I0912 21:44:29.526943 1594800 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 21:44:29.526976 1594800 cache.go:56] Caching tarball of preloaded images
	I0912 21:44:29.527147 1594800 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:44:29.528883 1594800 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0912 21:44:29.528906 1594800 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 21:44:29.613551 1594800 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0912 21:44:33.675876 1594800 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 21:44:33.676048 1594800 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0912 21:44:34.676389 1594800 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0912 21:44:34.676846 1594800 profile.go:143] Saving config to /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/download-only-658229/config.json ...
	I0912 21:44:34.676900 1594800 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/download-only-658229/config.json: {Name:mk444db95f1f7f6e7613f02549be917afea7a8a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:34.677577 1594800 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0912 21:44:34.677831 1594800 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19616-1589418/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-658229 host does not exist
	  To start a cluster, run: "minikube start -p download-only-658229"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.32s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-658229
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-308645 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-308645 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.026264646s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.03s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-308645
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-308645: exit status 85 (68.858747ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | -p download-only-658229        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| delete  | -p download-only-658229        | download-only-658229 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC | 12 Sep 24 21:44 UTC |
	| start   | -o=json --download-only        | download-only-308645 | jenkins | v1.34.0 | 12 Sep 24 21:44 UTC |                     |
	|         | -p download-only-308645        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/12 21:44:45
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:44:45.490655 1595000 out.go:345] Setting OutFile to fd 1 ...
	I0912 21:44:45.490802 1595000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:45.490813 1595000 out.go:358] Setting ErrFile to fd 2...
	I0912 21:44:45.490818 1595000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 21:44:45.491067 1595000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 21:44:45.491478 1595000 out.go:352] Setting JSON to true
	I0912 21:44:45.492429 1595000 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":23226,"bootTime":1726154260,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0912 21:44:45.492531 1595000 start.go:139] virtualization:  
	I0912 21:44:45.521060 1595000 out.go:97] [download-only-308645] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 21:44:45.521156 1595000 notify.go:220] Checking for updates...
	I0912 21:44:45.542010 1595000 out.go:169] MINIKUBE_LOCATION=19616
	I0912 21:44:45.566086 1595000 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:44:45.600388 1595000 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 21:44:45.632212 1595000 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	I0912 21:44:45.663210 1595000 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0912 21:44:45.728661 1595000 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:44:45.728977 1595000 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 21:44:45.748821 1595000 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 21:44:45.748927 1595000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:45.799009 1595000 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:45.788945743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:45.799118 1595000 docker.go:318] overlay module found
	I0912 21:44:45.810497 1595000 out.go:97] Using the docker driver based on user configuration
	I0912 21:44:45.810533 1595000 start.go:297] selected driver: docker
	I0912 21:44:45.810540 1595000 start.go:901] validating driver "docker" against <nil>
	I0912 21:44:45.810671 1595000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:44:45.859728 1595000 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-12 21:44:45.849460026 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 21:44:45.859885 1595000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0912 21:44:45.860184 1595000 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0912 21:44:45.860379 1595000 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:44:45.876327 1595000 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-308645 host does not exist
	  To start a cluster, run: "minikube start -p download-only-308645"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-308645
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-696147 --alsologtostderr --binary-mirror http://127.0.0.1:42489 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-696147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-696147
--- PASS: TestBinaryMirror (0.56s)

TestOffline (62.49s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-690166 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-690166 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m0.432023852s)
helpers_test.go:175: Cleaning up "offline-docker-690166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-690166
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-690166: (2.059283203s)
--- PASS: TestOffline (62.49s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-648158
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-648158: exit status 85 (62.793252ms)

-- stdout --
	* Profile "addons-648158" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-648158"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-648158
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-648158: exit status 85 (64.635587ms)

-- stdout --
	* Profile "addons-648158" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-648158"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (221.49s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-648158 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-648158 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m41.49275084s)
--- PASS: TestAddons/Setup (221.49s)

TestAddons/serial/Volcano (40.02s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 44.805774ms
addons_test.go:897: volcano-scheduler stabilized in 44.900147ms
addons_test.go:905: volcano-admission stabilized in 44.937184ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-8jvxk" [ad3f4a92-5361-4ef3-807e-be53f29a012e] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00974232s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-r7sfh" [0b53c1a5-3052-433d-a016-c5fd57df10b1] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.00429762s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-fldhg" [dc6c6826-383a-4573-b985-cedffac75344] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003647767s
addons_test.go:932: (dbg) Run:  kubectl --context addons-648158 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-648158 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-648158 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b7612d9d-1696-42ce-a890-a305d76e0838] Pending
helpers_test.go:344: "test-job-nginx-0" [b7612d9d-1696-42ce-a890-a305d76e0838] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b7612d9d-1696-42ce-a890-a305d76e0838] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004041685s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable volcano --alsologtostderr -v=1: (10.372414292s)
--- PASS: TestAddons/serial/Volcano (40.02s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-648158 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-648158 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (19.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-648158 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-648158 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-648158 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5a35f7ac-57ad-42d4-a9d2-6a442c1829cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5a35f7ac-57ad-42d4-a9d2-6a442c1829cd] Running
2024/09/12 21:58:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004823619s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-648158 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable ingress-dns --alsologtostderr -v=1: (1.864801829s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable ingress --alsologtostderr -v=1: (7.75846568s)
--- PASS: TestAddons/parallel/Ingress (19.43s)

TestAddons/parallel/InspektorGadget (11.72s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-shtcf" [330e50fa-cf9b-4d26-b9af-6f1e06bcf799] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003966909s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-648158
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-648158: (5.710732516s)
--- PASS: TestAddons/parallel/InspektorGadget (11.72s)

TestAddons/parallel/MetricsServer (6.72s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.481937ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-k2dzp" [eb6c8928-90e8-498f-9bc2-1e0d328da8dd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003238439s
addons_test.go:417: (dbg) Run:  kubectl --context addons-648158 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.72s)

TestAddons/parallel/CSI (48.16s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.947136ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-648158 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-648158 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [407af284-d9dd-4684-aff8-8ac22c6ebc99] Pending
helpers_test.go:344: "task-pv-pod" [407af284-d9dd-4684-aff8-8ac22c6ebc99] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [407af284-d9dd-4684-aff8-8ac22c6ebc99] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004200377s
addons_test.go:590: (dbg) Run:  kubectl --context addons-648158 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-648158 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-648158 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-648158 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-648158 delete pod task-pv-pod: (1.057811136s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-648158 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-648158 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-648158 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9a11ba54-18ab-49bd-84c2-35daccd7bf74] Pending
helpers_test.go:344: "task-pv-pod-restore" [9a11ba54-18ab-49bd-84c2-35daccd7bf74] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9a11ba54-18ab-49bd-84c2-35daccd7bf74] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004427965s
addons_test.go:632: (dbg) Run:  kubectl --context addons-648158 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-648158 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-648158 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.741626905s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.16s)

TestAddons/parallel/Headlamp (18.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-648158 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9pj97" [ddad1a9b-a85f-45a5-92f9-a1c94e8d1943] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-9pj97" [ddad1a9b-a85f-45a5-92f9-a1c94e8d1943] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9pj97" [ddad1a9b-a85f-45a5-92f9-a1c94e8d1943] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003061401s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable headlamp --alsologtostderr -v=1: (5.730843353s)
--- PASS: TestAddons/parallel/Headlamp (18.64s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-cdpm7" [4199ddb5-bc38-4594-988a-5d623880015d] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003587717s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-648158
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/parallel/LocalPath (53.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-648158 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-648158 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [abf487e5-1d16-4945-aef6-a3173ba8c084] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [abf487e5-1d16-4945-aef6-a3173ba8c084] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [abf487e5-1d16-4945-aef6-a3173ba8c084] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004016898s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-648158 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 ssh "cat /opt/local-path-provisioner/pvc-f35533b4-afcc-4dc0-ac78-6e7f43f1c177_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-648158 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-648158 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.317179198s)
--- PASS: TestAddons/parallel/LocalPath (53.47s)

TestAddons/parallel/NvidiaDevicePlugin (6.47s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z4pwc" [c1e3c33a-ac28-4943-aad4-27c2cbb14eef] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003551543s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-648158
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.47s)

TestAddons/parallel/Yakd (11.69s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-n5gz7" [b764f266-4937-4d62-892f-85e660909e38] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004024114s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-648158 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-648158 addons disable yakd --alsologtostderr -v=1: (5.682261479s)
--- PASS: TestAddons/parallel/Yakd (11.69s)

TestAddons/StoppedEnableDisable (6.01s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-648158
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-648158: (5.752420182s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-648158
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-648158
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-648158
--- PASS: TestAddons/StoppedEnableDisable (6.01s)

TestCertOptions (35.54s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-313111 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-313111 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (32.740915826s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-313111 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-313111 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-313111 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-313111" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-313111
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-313111: (2.129195676s)
--- PASS: TestCertOptions (35.54s)

TestCertExpiration (248.71s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-912532 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
E0912 22:37:44.515239 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-912532 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (42.491208614s)
E0912 22:38:33.784234 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-912532 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-912532 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (24.100727728s)
helpers_test.go:175: Cleaning up "cert-expiration-912532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-912532
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-912532: (2.114227005s)
--- PASS: TestCertExpiration (248.71s)

TestDockerFlags (45s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-325969 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-325969 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.727070348s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-325969 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-325969 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-325969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-325969
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-325969: (2.416203041s)
--- PASS: TestDockerFlags (45.00s)

TestForceSystemdFlag (44.58s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-011332 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-011332 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.51211604s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-011332 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-011332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-011332
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-011332: (2.650611656s)
--- PASS: TestForceSystemdFlag (44.58s)

TestForceSystemdEnv (45.33s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-870763 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-870763 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (42.573930519s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-870763 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-870763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-870763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-870763: (2.277304769s)
--- PASS: TestForceSystemdEnv (45.33s)

TestErrorSpam/setup (31.98s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-914358 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914358 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-914358 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-914358 --driver=docker  --container-runtime=docker: (31.980924438s)
--- PASS: TestErrorSpam/setup (31.98s)

TestErrorSpam/start (0.75s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.07s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.44s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 pause
--- PASS: TestErrorSpam/pause (1.44s)

TestErrorSpam/unpause (1.49s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 unpause
--- PASS: TestErrorSpam/unpause (1.49s)

TestErrorSpam/stop (11.04s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 stop: (10.838105751s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-914358 --log_dir /tmp/nospam-914358 stop
--- PASS: TestErrorSpam/stop (11.04s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19616-1589418/.minikube/files/etc/test/nested/copy/1594794/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.49s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-537030 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (45.489585646s)
--- PASS: TestFunctional/serial/StartWithProxy (45.49s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.84s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-537030 --alsologtostderr -v=8: (36.834102907s)
functional_test.go:663: soft start took 36.835313963s for "functional-537030" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.84s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-537030 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:3.1: (1.120509671s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:3.3: (1.152684348s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 cache add registry.k8s.io/pause:latest: (1.062657379s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-537030 /tmp/TestFunctionalserialCacheCmdcacheadd_local431740662/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache add minikube-local-cache-test:functional-537030
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache delete minikube-local-cache-test:functional-537030
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-537030
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.83249ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.65s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 kubectl -- --context functional-537030 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-537030 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (41.44s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-537030 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.434746919s)
functional_test.go:761: restart took 41.434857949s for "functional-537030" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.44s)

TestFunctional/serial/ComponentHealth (0.1s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-537030 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.17s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 logs: (1.168120006s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.19s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 logs --file /tmp/TestFunctionalserialLogsFileCmd3388437718/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 logs --file /tmp/TestFunctionalserialLogsFileCmd3388437718/001/logs.txt: (1.18510524s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.19s)

TestFunctional/serial/InvalidService (4.57s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-537030 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-537030
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-537030: exit status 115 (704.583804ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30861 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-537030 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.57s)

TestFunctional/parallel/ConfigCmd (0.43s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 config get cpus: exit status 14 (85.177837ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 config get cpus: exit status 14 (55.899075ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
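The exit status 14 seen twice above is minikube's "key not found" code for `config get` on an unset key. A toy Python model of the set/unset/get round-trip the test drives (the dict stands in for the per-profile config file; nothing here calls the real binary, and the (14, ...) pair simply mirrors the exit status and stderr recorded in the log):

```python
# Toy model of `minikube config set/unset/get`; purely illustrative.
config: dict = {}

def config_get(key):
    """Return (exit_status, output); 14 models 'key not found'."""
    if key not in config:
        return 14, "Error: specified key could not be found in config"
    return 0, config[key]

status, _ = config_get("cpus")        # unset key -> exit status 14
config["cpus"] = "2"                  # config set cpus 2
assert config_get("cpus") == (0, "2") # config get cpus succeeds
del config["cpus"]                    # config unset cpus
status, msg = config_get("cpus")      # back to exit status 14
print(status, msg)
```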

TestFunctional/parallel/DashboardCmd (10.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-537030 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-537030 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1636331: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.94s)

TestFunctional/parallel/DryRun (0.45s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-537030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (222.306594ms)

-- stdout --
	* [functional-537030] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0912 22:03:17.463097 1636025 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:17.463227 1636025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:17.463233 1636025 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:17.463238 1636025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:17.463594 1636025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:03:17.464023 1636025 out.go:352] Setting JSON to false
	I0912 22:03:17.469363 1636025 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24338,"bootTime":1726154260,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0912 22:03:17.469444 1636025 start.go:139] virtualization:  
	I0912 22:03:17.471387 1636025 out.go:177] * [functional-537030] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0912 22:03:17.472548 1636025 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:03:17.472662 1636025 notify.go:220] Checking for updates...
	I0912 22:03:17.477433 1636025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:03:17.478674 1636025 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 22:03:17.480132 1636025 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	I0912 22:03:17.481248 1636025 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 22:03:17.482476 1636025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:03:17.483954 1636025 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:03:17.484522 1636025 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:03:17.525652 1636025 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:03:17.525969 1636025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:03:17.607150 1636025 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:03:17.59638688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:03:17.607262 1636025 docker.go:318] overlay module found
	I0912 22:03:17.608926 1636025 out.go:177] * Using the docker driver based on existing profile
	I0912 22:03:17.610167 1636025 start.go:297] selected driver: docker
	I0912 22:03:17.610202 1636025 start.go:901] validating driver "docker" against &{Name:functional-537030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-537030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:03:17.610313 1636025 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:03:17.612190 1636025 out.go:201] 
	W0912 22:03:17.613523 1636025 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 22:03:17.614756 1636025 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.45s)
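The `RSRC_INSUFFICIENT_REQ_MEMORY` exit above comes from minikube's preflight check that the requested allocation meets a usable minimum. A hypothetical sketch of that check (the 1800MB floor and the message wording are taken from the log; the function name is invented here, not minikube's real API):

```python
# Hypothetical sketch of the dry-run memory validation seen in the stderr above.
USABLE_MIN_MB = 1800  # floor reported by minikube in this log

def validate_requested_memory(requested_mb):
    """Return an error string if the request is below the usable minimum, else None."""
    if requested_mb < USABLE_MIN_MB:
        return ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory "
                f"allocation {requested_mb}MiB is less than the usable minimum "
                f"of {USABLE_MIN_MB}MB")
    return None

err = validate_requested_memory(250)  # mirrors `start --dry-run --memory 250MB`
print(err)
```

A `--dry-run` start still performs this validation, which is why the test expects a non-zero exit without any cluster being created.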

TestFunctional/parallel/InternationalLanguage (0.21s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-537030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-537030 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (206.878723ms)

-- stdout --
	* [functional-537030] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0912 22:03:17.253207 1635942 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:03:17.253328 1635942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:17.253333 1635942 out.go:358] Setting ErrFile to fd 2...
	I0912 22:03:17.253338 1635942 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:03:17.255144 1635942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:03:17.255530 1635942 out.go:352] Setting JSON to false
	I0912 22:03:17.256566 1635942 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":24338,"bootTime":1726154260,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0912 22:03:17.256646 1635942 start.go:139] virtualization:  
	I0912 22:03:17.258701 1635942 out.go:177] * [functional-537030] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0912 22:03:17.260296 1635942 out.go:177]   - MINIKUBE_LOCATION=19616
	I0912 22:03:17.260418 1635942 notify.go:220] Checking for updates...
	I0912 22:03:17.262935 1635942 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:03:17.264374 1635942 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	I0912 22:03:17.265895 1635942 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	I0912 22:03:17.267380 1635942 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0912 22:03:17.268784 1635942 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:03:17.270633 1635942 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:03:17.271135 1635942 driver.go:394] Setting default libvirt URI to qemu:///system
	I0912 22:03:17.296162 1635942 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0912 22:03:17.296277 1635942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:03:17.381690 1635942 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-12 22:03:17.371143781 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:03:17.381803 1635942 docker.go:318] overlay module found
	I0912 22:03:17.383364 1635942 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0912 22:03:17.384394 1635942 start.go:297] selected driver: docker
	I0912 22:03:17.384407 1635942 start.go:901] validating driver "docker" against &{Name:functional-537030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726156396-19616@sha256:66b06a42534e914a5c8ad765d7508a93a34031939ec9a6b3a818ef0a444ff889 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-537030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0912 22:03:17.384521 1635942 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:03:17.389818 1635942 out.go:201] 
	W0912 22:03:17.390912 1635942 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 22:03:17.392086 1635942 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)

TestFunctional/parallel/StatusCmd (1.35s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.35s)

TestFunctional/parallel/ServiceCmdConnect (12.64s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-537030 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-537030 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-g9r59" [4cafb006-9b45-4d57-bc2b-b2c8d8e2358e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-g9r59" [4cafb006-9b45-4d57-bc2b-b2c8d8e2358e] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003274502s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32086
functional_test.go:1675: http://192.168.49.2:32086: success! body:

Hostname: hello-node-connect-65d86f57f4-g9r59

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32086
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.64s)
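The connectivity check above resolves a NodePort URL with `minikube service ... --url` and then fetches it, asserting on the echoed body. The probe pattern can be sketched generically; this runs against a local stand-in server rather than the real cluster endpoint, and the `Echo` handler here is an invented miniature of the echoserver image, not its actual code:

```python
import http.server
import threading
import urllib.request

# Minimal stand-in for the echoserver pod behind the NodePort; the real test
# hits http://192.168.49.2:<nodeport> instead of this local server.
class Echo(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = (f"Hostname: stand-in-pod\n"
                f"real path={self.path}\n").encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=srv.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{srv.server_address[1]}/"  # stands in for the service URL
resp = urllib.request.urlopen(url, timeout=5).read().decode()
srv.shutdown()
print(resp)
```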

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (27.96s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b526e045-6098-4250-ac2c-148ce52186a6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003172797s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-537030 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-537030 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-537030 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-537030 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a9d778b-8501-4d56-82a7-2339ad6a51d4] Pending
helpers_test.go:344: "sp-pod" [8a9d778b-8501-4d56-82a7-2339ad6a51d4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8a9d778b-8501-4d56-82a7-2339ad6a51d4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003591478s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-537030 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-537030 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-537030 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb463f67-4753-4a3c-bbfa-f85458d177ee] Pending
helpers_test.go:344: "sp-pod" [bb463f67-4753-4a3c-bbfa-f85458d177ee] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb463f67-4753-4a3c-bbfa-f85458d177ee] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004098327s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-537030 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.96s)
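The property this test verifies is that data written through the claim survives pod deletion: `touch /tmp/mount/foo` in the first `sp-pod`, delete it, re-create the pod from the same manifest, and the file is still there on `ls`. A toy illustration of that invariant (a temp directory stands in for the PVC-bound volume; no Kubernetes involved):

```python
import pathlib
import tempfile

# The temp directory models the claim-backed volume, which outlives any pod.
volume = pathlib.Path(tempfile.mkdtemp())

def mount_volume(vol):
    """Each call models a fresh sp-pod mounting the same claim."""
    return vol

mount = mount_volume(volume)
(mount / "foo").touch()        # kubectl exec sp-pod -- touch /tmp/mount/foo
del mount                      # first sp-pod deleted
mount = mount_volume(volume)   # second sp-pod mounts the same claim
names = sorted(p.name for p in mount.iterdir())
print(names)                   # the file written by the first pod survives
```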

TestFunctional/parallel/SSHCmd (0.66s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

TestFunctional/parallel/CpCmd (2.36s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh -n functional-537030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cp functional-537030:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1059632600/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh -n functional-537030 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh -n functional-537030 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.36s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1594794/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /etc/test/nested/copy/1594794/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.09s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1594794.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /etc/ssl/certs/1594794.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1594794.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /usr/share/ca-certificates/1594794.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/15947942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /etc/ssl/certs/15947942.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/15947942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /usr/share/ca-certificates/15947942.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-537030 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo systemctl is-active crio"
2024/09/12 22:03:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh "sudo systemctl is-active crio": exit status 1 (305.081593ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

TestFunctional/parallel/License (0.19s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.19s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1633185: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-537030 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b99e5dbe-7327-418a-aa63-a537443566ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b99e5dbe-7327-418a-aa63-a537443566ee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004210546s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.43s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-537030 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.246.143 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-537030 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-537030 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-537030 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-v9bfq" [585cf7b4-7960-4c47-b14e-6361b6893153] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-v9bfq" [585cf7b4-7960-4c47-b14e-6361b6893153] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004073957s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.477744ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "97.4288ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.47s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service list -o json
functional_test.go:1494: Took "579.664594ms" to run "out/minikube-linux-arm64 -p functional-537030 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "386.056518ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "69.349755ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30782
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

TestFunctional/parallel/MountCmd/any-port (8.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdany-port3284246390/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726178594488913518" to /tmp/TestFunctionalparallelMountCmdany-port3284246390/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726178594488913518" to /tmp/TestFunctionalparallelMountCmdany-port3284246390/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726178594488913518" to /tmp/TestFunctionalparallelMountCmdany-port3284246390/001/test-1726178594488913518
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (491.357985ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 22:03 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 22:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 22:03 test-1726178594488913518
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh cat /mount-9p/test-1726178594488913518
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-537030 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [315d1879-d1f1-4b46-8468-c2ec679e43b7] Pending
helpers_test.go:344: "busybox-mount" [315d1879-d1f1-4b46-8468-c2ec679e43b7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [315d1879-d1f1-4b46-8468-c2ec679e43b7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [315d1879-d1f1-4b46-8468-c2ec679e43b7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003718082s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-537030 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdany-port3284246390/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.68s)

TestFunctional/parallel/ServiceCmd/Format (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

TestFunctional/parallel/ServiceCmd/URL (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30782
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.49s)

TestFunctional/parallel/MountCmd/specific-port (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdspecific-port1348200304/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (459.926774ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdspecific-port1348200304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh "sudo umount -f /mount-9p": exit status 1 (343.415223ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-537030 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdspecific-port1348200304/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.30s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T" /mount1: exit status 1 (865.437867ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-537030 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-537030 /tmp/TestFunctionalparallelMountCmdVerifyCleanup398711389/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.57s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 version -o=json --components: (1.157951378s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-537030 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-537030
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-537030
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-537030 image ls --format short --alsologtostderr:
I0912 22:03:34.889656 1639236 out.go:345] Setting OutFile to fd 1 ...
I0912 22:03:34.889796 1639236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:34.889806 1639236 out.go:358] Setting ErrFile to fd 2...
I0912 22:03:34.889812 1639236 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:34.890084 1639236 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 22:03:34.890717 1639236 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:34.890848 1639236 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:34.891369 1639236 cli_runner.go:164] Run: docker container inspect functional-537030 --format={{.State.Status}}
I0912 22:03:34.920023 1639236 ssh_runner.go:195] Run: systemctl --version
I0912 22:03:34.920078 1639236 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-537030
I0912 22:03:34.951187 1639236 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34340 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/functional-537030/id_rsa Username:docker}
I0912 22:03:35.053586 1639236 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-537030 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-537030 | ce2d2cda2d858 | 4.78MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-537030 | b015f12255e74 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-537030 image ls --format table --alsologtostderr:
I0912 22:03:35.400430 1639391 out.go:345] Setting OutFile to fd 1 ...
I0912 22:03:35.400557 1639391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.400614 1639391 out.go:358] Setting ErrFile to fd 2...
I0912 22:03:35.400620 1639391 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.400858 1639391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 22:03:35.401512 1639391 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.401640 1639391 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.402179 1639391 cli_runner.go:164] Run: docker container inspect functional-537030 --format={{.State.Status}}
I0912 22:03:35.418967 1639391 ssh_runner.go:195] Run: systemctl --version
I0912 22:03:35.419024 1639391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-537030
I0912 22:03:35.438864 1639391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34340 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/functional-537030/id_rsa Username:docker}
I0912 22:03:35.548051 1639391 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0912 22:03:36.367395 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-537030 image ls --format json --alsologtostderr:
[{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-537030"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b015f12255e743d4012921a5315e9c010c1af1fe6997929ed308cd81263c3847","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-537030"],"size":"30"},{"id":"d3f53a98c0a9d91
63c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","rep
oDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"
size":"484000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-537030 image ls --format json --alsologtostderr:
I0912 22:03:35.178832 1639308 out.go:345] Setting OutFile to fd 1 ...
I0912 22:03:35.179060 1639308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.179091 1639308 out.go:358] Setting ErrFile to fd 2...
I0912 22:03:35.179113 1639308 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.179399 1639308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 22:03:35.180112 1639308 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.180295 1639308 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.180889 1639308 cli_runner.go:164] Run: docker container inspect functional-537030 --format={{.State.Status}}
I0912 22:03:35.199469 1639308 ssh_runner.go:195] Run: systemctl --version
I0912 22:03:35.199517 1639308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-537030
I0912 22:03:35.219076 1639308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34340 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/functional-537030/id_rsa Username:docker}
I0912 22:03:35.313286 1639308 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
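For downstream tooling, the `image ls --format json` output shown above is a flat array of `{id, repoDigests, repoTags, size}` objects, with sizes serialized as strings. A minimal Go sketch of parsing that format — the `imageInfo` struct and `parseImageList` helper are illustrative names, not minikube's own types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageInfo mirrors one entry of `minikube image ls --format json`,
// using the field names visible in the output above.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, serialized as a string
}

// parseImageList decodes the JSON array emitted by `image ls --format json`.
func parseImageList(data []byte) ([]imageInfo, error) {
	var imgs []imageInfo
	if err := json.Unmarshal(data, &imgs); err != nil {
		return nil, err
	}
	return imgs, nil
}

func main() {
	sample := []byte(`[{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"}]`)
	imgs, err := parseImageList(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(imgs[0].RepoTags[0]) // prints: registry.k8s.io/pause:3.10
}
```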

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls --format yaml --alsologtostderr
E0912 22:03:35.085324 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-537030 image ls --format yaml --alsologtostderr:
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: b015f12255e743d4012921a5315e9c010c1af1fe6997929ed308cd81263c3847
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-537030
size: "30"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-537030
size: "4780000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-537030 image ls --format yaml --alsologtostderr:
I0912 22:03:34.901624 1639237 out.go:345] Setting OutFile to fd 1 ...
I0912 22:03:34.901831 1639237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:34.901858 1639237 out.go:358] Setting ErrFile to fd 2...
I0912 22:03:34.901879 1639237 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:34.902134 1639237 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 22:03:34.902780 1639237 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:34.902965 1639237 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:34.903494 1639237 cli_runner.go:164] Run: docker container inspect functional-537030 --format={{.State.Status}}
I0912 22:03:34.925668 1639237 ssh_runner.go:195] Run: systemctl --version
I0912 22:03:34.925723 1639237 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-537030
I0912 22:03:34.962685 1639237 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34340 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/functional-537030/id_rsa Username:docker}
I0912 22:03:35.061872 1639237 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-537030 ssh pgrep buildkitd: exit status 1 (344.783109ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image build -t localhost/my-image:functional-537030 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-537030 image build -t localhost/my-image:functional-537030 testdata/build --alsologtostderr: (2.897001297s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-537030 image build -t localhost/my-image:functional-537030 testdata/build --alsologtostderr:
I0912 22:03:35.513579 1639414 out.go:345] Setting OutFile to fd 1 ...
I0912 22:03:35.514224 1639414 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.514236 1639414 out.go:358] Setting ErrFile to fd 2...
I0912 22:03:35.514241 1639414 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0912 22:03:35.514482 1639414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
I0912 22:03:35.515106 1639414 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.516166 1639414 config.go:182] Loaded profile config "functional-537030": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0912 22:03:35.516711 1639414 cli_runner.go:164] Run: docker container inspect functional-537030 --format={{.State.Status}}
I0912 22:03:35.533801 1639414 ssh_runner.go:195] Run: systemctl --version
I0912 22:03:35.533864 1639414 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-537030
I0912 22:03:35.562834 1639414 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34340 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/functional-537030/id_rsa Username:docker}
I0912 22:03:35.669607 1639414 build_images.go:161] Building image from path: /tmp/build.1184844307.tar
I0912 22:03:35.669681 1639414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 22:03:35.678616 1639414 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1184844307.tar
I0912 22:03:35.681944 1639414 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1184844307.tar: stat -c "%s %y" /var/lib/minikube/build/build.1184844307.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1184844307.tar': No such file or directory
I0912 22:03:35.681982 1639414 ssh_runner.go:362] scp /tmp/build.1184844307.tar --> /var/lib/minikube/build/build.1184844307.tar (3072 bytes)
I0912 22:03:35.709733 1639414 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1184844307
I0912 22:03:35.718257 1639414 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1184844307 -xf /var/lib/minikube/build/build.1184844307.tar
I0912 22:03:35.727409 1639414 docker.go:360] Building image: /var/lib/minikube/build/build.1184844307
I0912 22:03:35.727491 1639414 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-537030 /var/lib/minikube/build/build.1184844307
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:6209709d49ca2baa10fa2767e0e11bd310c1931687d6ba734e3ad3e4cfa3e32d done
#8 naming to localhost/my-image:functional-537030 done
#8 DONE 0.1s
I0912 22:03:38.312283 1639414 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-537030 /var/lib/minikube/build/build.1184844307: (2.584766149s)
I0912 22:03:38.312350 1639414 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1184844307
I0912 22:03:38.321829 1639414 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1184844307.tar
I0912 22:03:38.330535 1639414 build_images.go:217] Built localhost/my-image:functional-537030 from /tmp/build.1184844307.tar
I0912 22:03:38.330563 1639414 build_images.go:133] succeeded building to: functional-537030
I0912 22:03:38.330568 1639414 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.45s)

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-537030
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 update-context --alsologtostderr -v=2
E0912 22:03:34.118904 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 update-context --alsologtostderr -v=2
E0912 22:03:33.875212 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:03:33.956851 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/DockerEnv/bash (1.17s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-537030 docker-env) && out/minikube-linux-arm64 status -p functional-537030"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-537030 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.17s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image load --daemon kicbase/echo-server:functional-537030 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image load --daemon kicbase/echo-server:functional-537030 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-537030
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image load --daemon kicbase/echo-server:functional-537030 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image save kicbase/echo-server:functional-537030 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image rm kicbase/echo-server:functional-537030 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
E0912 22:03:33.784041 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:03:33.790959 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:03:33.802425 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:03:33.829456 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image ls
E0912 22:03:34.441279 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-537030
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-537030 image save --daemon kicbase/echo-server:functional-537030 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-537030
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-537030
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-537030
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-537030
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (121.9s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-382956 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 22:03:44.050645 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:03:54.291899 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:04:14.773857 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:04:55.735743 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-382956 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m1.011654822s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.90s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (41.33s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- rollout status deployment/busybox
E0912 22:06:17.659382 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-382956 -- rollout status deployment/busybox: (38.33948669s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-8f24c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-bpvpg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-fkd7l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-8f24c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-bpvpg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-fkd7l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-8f24c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-bpvpg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-fkd7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (41.33s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.62s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-8f24c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-8f24c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-bpvpg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-bpvpg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-fkd7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-382956 -- exec busybox-7dff88458-fkd7l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
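The PingHostFromPods steps above recover the host gateway IP from busybox `nslookup` output with `awk 'NR==5' | cut -d' ' -f3` before pinging it. A minimal standalone sketch of that pipeline, run against an illustrative sample of busybox-style output (the sample text is hypothetical, not captured from this run):

```shell
# Illustrative busybox-style nslookup output (hypothetical sample, not from this run)
out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal'

# Line 5 carries the first resolved address; the third space-delimited
# field on that line is the IP the test subsequently pings.
ip=$(printf '%s\n' "$out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # -> 192.168.49.1
```

The `NR==5` index is brittle by design here: it matches the fixed line layout busybox's `nslookup` emits, which is why the test pins both the line number and the field position.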

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (28.05s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-382956 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-382956 -v=7 --alsologtostderr: (27.003009705s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr: (1.048952726s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (28.05s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-382956 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.84s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 status --output json -v=7 --alsologtostderr: (1.010643259s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp testdata/cp-test.txt ha-382956:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile510636164/001/cp-test_ha-382956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956:/home/docker/cp-test.txt ha-382956-m02:/home/docker/cp-test_ha-382956_ha-382956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test_ha-382956_ha-382956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956:/home/docker/cp-test.txt ha-382956-m03:/home/docker/cp-test_ha-382956_ha-382956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test_ha-382956_ha-382956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956:/home/docker/cp-test.txt ha-382956-m04:/home/docker/cp-test_ha-382956_ha-382956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test_ha-382956_ha-382956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp testdata/cp-test.txt ha-382956-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile510636164/001/cp-test_ha-382956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m02:/home/docker/cp-test.txt ha-382956:/home/docker/cp-test_ha-382956-m02_ha-382956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test_ha-382956-m02_ha-382956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m02:/home/docker/cp-test.txt ha-382956-m03:/home/docker/cp-test_ha-382956-m02_ha-382956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test_ha-382956-m02_ha-382956-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m02:/home/docker/cp-test.txt ha-382956-m04:/home/docker/cp-test_ha-382956-m02_ha-382956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test_ha-382956-m02_ha-382956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp testdata/cp-test.txt ha-382956-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile510636164/001/cp-test_ha-382956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m03:/home/docker/cp-test.txt ha-382956:/home/docker/cp-test_ha-382956-m03_ha-382956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test_ha-382956-m03_ha-382956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m03:/home/docker/cp-test.txt ha-382956-m02:/home/docker/cp-test_ha-382956-m03_ha-382956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test_ha-382956-m03_ha-382956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m03:/home/docker/cp-test.txt ha-382956-m04:/home/docker/cp-test_ha-382956-m03_ha-382956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test_ha-382956-m03_ha-382956-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp testdata/cp-test.txt ha-382956-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile510636164/001/cp-test_ha-382956-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m04:/home/docker/cp-test.txt ha-382956:/home/docker/cp-test_ha-382956-m04_ha-382956.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956 "sudo cat /home/docker/cp-test_ha-382956-m04_ha-382956.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m04:/home/docker/cp-test.txt ha-382956-m02:/home/docker/cp-test_ha-382956-m04_ha-382956-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m02 "sudo cat /home/docker/cp-test_ha-382956-m04_ha-382956-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 cp ha-382956-m04:/home/docker/cp-test.txt ha-382956-m03:/home/docker/cp-test_ha-382956-m04_ha-382956-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 ssh -n ha-382956-m03 "sudo cat /home/docker/cp-test_ha-382956-m04_ha-382956-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.84s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.78s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 node stop m02 -v=7 --alsologtostderr: (11.048274929s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr: exit status 7 (735.336544ms)

                                                
                                                
-- stdout --
	ha-382956
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-382956-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-382956-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-382956-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:07:25.908347 1662115 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:07:25.908543 1662115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:07:25.908555 1662115 out.go:358] Setting ErrFile to fd 2...
	I0912 22:07:25.908560 1662115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:07:25.908825 1662115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:07:25.909073 1662115 out.go:352] Setting JSON to false
	I0912 22:07:25.909099 1662115 mustload.go:65] Loading cluster: ha-382956
	I0912 22:07:25.909322 1662115 notify.go:220] Checking for updates...
	I0912 22:07:25.909524 1662115 config.go:182] Loaded profile config "ha-382956": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:07:25.909545 1662115 status.go:255] checking status of ha-382956 ...
	I0912 22:07:25.910370 1662115 cli_runner.go:164] Run: docker container inspect ha-382956 --format={{.State.Status}}
	I0912 22:07:25.932245 1662115 status.go:330] ha-382956 host status = "Running" (err=<nil>)
	I0912 22:07:25.932273 1662115 host.go:66] Checking if "ha-382956" exists ...
	I0912 22:07:25.932583 1662115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-382956
	I0912 22:07:25.965413 1662115 host.go:66] Checking if "ha-382956" exists ...
	I0912 22:07:25.965720 1662115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:07:25.965777 1662115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-382956
	I0912 22:07:25.985179 1662115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34345 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/ha-382956/id_rsa Username:docker}
	I0912 22:07:26.086308 1662115 ssh_runner.go:195] Run: systemctl --version
	I0912 22:07:26.091002 1662115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:07:26.102301 1662115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:07:26.166624 1662115 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-12 22:07:26.156753286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:07:26.167218 1662115 kubeconfig.go:125] found "ha-382956" server: "https://192.168.49.254:8443"
	I0912 22:07:26.167249 1662115 api_server.go:166] Checking apiserver status ...
	I0912 22:07:26.167297 1662115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:07:26.179289 1662115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2207/cgroup
	I0912 22:07:26.188633 1662115 api_server.go:182] apiserver freezer: "11:freezer:/docker/0edbf4a66b8e4a28922a3d591f92b055799cb13df26039e87b85f25c35ae2fa0/kubepods/burstable/poda84abf95d3b133d74e00f39454ae7aea/f9ab5960c18d35be7cf5ccb43bd9975a8096157eb32882ac724966139fd2229b"
	I0912 22:07:26.188711 1662115 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0edbf4a66b8e4a28922a3d591f92b055799cb13df26039e87b85f25c35ae2fa0/kubepods/burstable/poda84abf95d3b133d74e00f39454ae7aea/f9ab5960c18d35be7cf5ccb43bd9975a8096157eb32882ac724966139fd2229b/freezer.state
	I0912 22:07:26.197640 1662115 api_server.go:204] freezer state: "THAWED"
	I0912 22:07:26.197666 1662115 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 22:07:26.205498 1662115 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 22:07:26.205529 1662115 status.go:422] ha-382956 apiserver status = Running (err=<nil>)
	I0912 22:07:26.205540 1662115 status.go:257] ha-382956 status: &{Name:ha-382956 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:07:26.205558 1662115 status.go:255] checking status of ha-382956-m02 ...
	I0912 22:07:26.205871 1662115 cli_runner.go:164] Run: docker container inspect ha-382956-m02 --format={{.State.Status}}
	I0912 22:07:26.222877 1662115 status.go:330] ha-382956-m02 host status = "Stopped" (err=<nil>)
	I0912 22:07:26.222903 1662115 status.go:343] host is not running, skipping remaining checks
	I0912 22:07:26.222910 1662115 status.go:257] ha-382956-m02 status: &{Name:ha-382956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:07:26.222942 1662115 status.go:255] checking status of ha-382956-m03 ...
	I0912 22:07:26.223269 1662115 cli_runner.go:164] Run: docker container inspect ha-382956-m03 --format={{.State.Status}}
	I0912 22:07:26.239694 1662115 status.go:330] ha-382956-m03 host status = "Running" (err=<nil>)
	I0912 22:07:26.239723 1662115 host.go:66] Checking if "ha-382956-m03" exists ...
	I0912 22:07:26.240021 1662115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-382956-m03
	I0912 22:07:26.256712 1662115 host.go:66] Checking if "ha-382956-m03" exists ...
	I0912 22:07:26.257140 1662115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:07:26.257199 1662115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-382956-m03
	I0912 22:07:26.274827 1662115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34355 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/ha-382956-m03/id_rsa Username:docker}
	I0912 22:07:26.370657 1662115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:07:26.384010 1662115 kubeconfig.go:125] found "ha-382956" server: "https://192.168.49.254:8443"
	I0912 22:07:26.384043 1662115 api_server.go:166] Checking apiserver status ...
	I0912 22:07:26.384092 1662115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:07:26.395536 1662115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2244/cgroup
	I0912 22:07:26.404874 1662115 api_server.go:182] apiserver freezer: "11:freezer:/docker/601fccbad12d4650aee263331f68f135457991349debcb814e11e0fd49c30a15/kubepods/burstable/pod27d2a231a466d59f47ddaeb8b69530b7/fba47ec1722f966f283483314ed2713a463ac16217d720cdd5598babe0e31fd9"
	I0912 22:07:26.404945 1662115 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/601fccbad12d4650aee263331f68f135457991349debcb814e11e0fd49c30a15/kubepods/burstable/pod27d2a231a466d59f47ddaeb8b69530b7/fba47ec1722f966f283483314ed2713a463ac16217d720cdd5598babe0e31fd9/freezer.state
	I0912 22:07:26.413627 1662115 api_server.go:204] freezer state: "THAWED"
	I0912 22:07:26.413660 1662115 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0912 22:07:26.421677 1662115 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0912 22:07:26.421709 1662115 status.go:422] ha-382956-m03 apiserver status = Running (err=<nil>)
	I0912 22:07:26.421721 1662115 status.go:257] ha-382956-m03 status: &{Name:ha-382956-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:07:26.421769 1662115 status.go:255] checking status of ha-382956-m04 ...
	I0912 22:07:26.422114 1662115 cli_runner.go:164] Run: docker container inspect ha-382956-m04 --format={{.State.Status}}
	I0912 22:07:26.440304 1662115 status.go:330] ha-382956-m04 host status = "Running" (err=<nil>)
	I0912 22:07:26.440329 1662115 host.go:66] Checking if "ha-382956-m04" exists ...
	I0912 22:07:26.440625 1662115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-382956-m04
	I0912 22:07:26.457448 1662115 host.go:66] Checking if "ha-382956-m04" exists ...
	I0912 22:07:26.457809 1662115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:07:26.457862 1662115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-382956-m04
	I0912 22:07:26.477386 1662115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34360 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/ha-382956-m04/id_rsa Username:docker}
	I0912 22:07:26.578765 1662115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:07:26.593758 1662115 status.go:257] ha-382956-m04 status: &{Name:ha-382956-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.78s)
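Among the status probes in the stderr above, each running node's disk usage is read with `df -h /var | awk 'NR==2{print $5}'`. A standalone sketch of what that pipeline extracts (checking `/` here, since `/var` may not be a separate mount on an arbitrary host):

```shell
# `df -h` prints a header row followed by one data row per filesystem;
# NR==2 selects the first data row, and $5 is the Use% column.
df -h / | awk 'NR==2{print $5}'
# prints a percentage such as "23%"
```

The probe deliberately asks `df` about a single path, so row 2 is always the data row for that filesystem regardless of how many mounts the node has.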

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (30.81s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 node start m02 -v=7 --alsologtostderr
E0912 22:07:44.515215 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.522527 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.533909 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.556015 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.597339 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.678880 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:44.840311 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:45.161550 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:45.803517 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:47.084873 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:49.646837 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:07:54.768745 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 node start m02 -v=7 --alsologtostderr: (29.676024922s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr: (1.009973105s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (30.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.21s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0912 22:08:05.010651 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (16.210843701s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (16.21s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-382956 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-382956 -v=7 --alsologtostderr
E0912 22:08:25.501271 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:08:33.783494 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-382956 -v=7 --alsologtostderr: (34.236483817s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-382956 --wait=true -v=7 --alsologtostderr
E0912 22:09:01.501527 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:09:06.462693 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:10:28.385174 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-382956 --wait=true -v=7 --alsologtostderr: (3m10.126013486s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-382956
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (224.51s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.21s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 node delete m03 -v=7 --alsologtostderr: (10.213749367s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.21s)
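The `kubectl get nodes -o go-template=...` check above prints the status of every node condition whose type is `Ready`. The same template can be evaluated locally with Go's `text/template`; this is a sketch against a hypothetical two-node JSON payload (the shape `kubectl get nodes -o json` returns), not minikube's own code:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// The same go-template string the test passes to kubectl: for each item,
// print the status of the condition whose type is "Ready".
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// readyStatuses applies the template to a JSON node list and returns
// one " <status>" line per node with a Ready condition.
func readyStatuses(raw string) string {
	var nodes map[string]any
	if err := json.Unmarshal([]byte(raw), &nodes); err != nil {
		panic(err)
	}
	var out bytes.Buffer
	if err := template.Must(template.New("ready").Parse(readyTmpl)).Execute(&out, nodes); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// Hypothetical miniature of `kubectl get nodes -o json` for two nodes.
	raw := `{"items":[
	  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`
	fmt.Print(readyStatuses(raw)) // " True" for each Ready node
}
```

A healthy cluster yields only `True` lines, which is what the test asserts after deleting the secondary node.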

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (33.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-382956 stop -v=7 --alsologtostderr: (32.911810826s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr: exit status 7 (125.229084ms)

-- stdout --
	ha-382956
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-382956-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-382956-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0912 22:12:43.455621 1689580 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:12:43.455834 1689580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:43.455861 1689580 out.go:358] Setting ErrFile to fd 2...
	I0912 22:12:43.455881 1689580 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:43.456202 1689580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:12:43.456440 1689580 out.go:352] Setting JSON to false
	I0912 22:12:43.456527 1689580 mustload.go:65] Loading cluster: ha-382956
	I0912 22:12:43.456616 1689580 notify.go:220] Checking for updates...
	I0912 22:12:43.457110 1689580 config.go:182] Loaded profile config "ha-382956": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:12:43.457133 1689580 status.go:255] checking status of ha-382956 ...
	I0912 22:12:43.457987 1689580 cli_runner.go:164] Run: docker container inspect ha-382956 --format={{.State.Status}}
	I0912 22:12:43.476423 1689580 status.go:330] ha-382956 host status = "Stopped" (err=<nil>)
	I0912 22:12:43.476447 1689580 status.go:343] host is not running, skipping remaining checks
	I0912 22:12:43.476454 1689580 status.go:257] ha-382956 status: &{Name:ha-382956 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:12:43.476514 1689580 status.go:255] checking status of ha-382956-m02 ...
	I0912 22:12:43.476875 1689580 cli_runner.go:164] Run: docker container inspect ha-382956-m02 --format={{.State.Status}}
	I0912 22:12:43.508641 1689580 status.go:330] ha-382956-m02 host status = "Stopped" (err=<nil>)
	I0912 22:12:43.508669 1689580 status.go:343] host is not running, skipping remaining checks
	I0912 22:12:43.508678 1689580 status.go:257] ha-382956-m02 status: &{Name:ha-382956-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:12:43.508697 1689580 status.go:255] checking status of ha-382956-m04 ...
	I0912 22:12:43.509107 1689580 cli_runner.go:164] Run: docker container inspect ha-382956-m04 --format={{.State.Status}}
	I0912 22:12:43.529949 1689580 status.go:330] ha-382956-m04 host status = "Stopped" (err=<nil>)
	I0912 22:12:43.529976 1689580 status.go:343] host is not running, skipping remaining checks
	I0912 22:12:43.529984 1689580 status.go:257] ha-382956-m04 status: &{Name:ha-382956-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (33.04s)

TestMultiControlPlane/serial/RestartCluster (87.89s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-382956 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 22:12:44.515841 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:13:12.227169 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:13:33.784082 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-382956 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.974891835s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (87.89s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.51s)

TestMultiControlPlane/serial/AddSecondaryNode (45.94s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-382956 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-382956 --control-plane -v=7 --alsologtostderr: (44.95273022s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-382956 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.94s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)

TestImageBuild/serial/Setup (31.31s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-167196 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-167196 --driver=docker  --container-runtime=docker: (31.30697622s)
--- PASS: TestImageBuild/serial/Setup (31.31s)

TestImageBuild/serial/NormalBuild (1.81s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-167196
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-167196: (1.805931038s)
--- PASS: TestImageBuild/serial/NormalBuild (1.81s)

TestImageBuild/serial/BuildWithBuildArg (1.03s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-167196
image_test.go:99: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-167196: (1.034769819s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.03s)

TestImageBuild/serial/BuildWithDockerIgnore (0.91s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-167196
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.91s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-167196
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.76s)

TestJSONOutput/start/Command (76.16s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-502826 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-502826 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m16.153465738s)
--- PASS: TestJSONOutput/start/Command (76.16s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-502826 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-502826 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.94s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-502826 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-502826 --output=json --user=testUser: (10.935678763s)
--- PASS: TestJSONOutput/stop/Command (10.94s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-454588 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-454588 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (86.248149ms)

-- stdout --
	{"specversion":"1.0","id":"56ddaf04-2c64-4b1e-81e9-a597d7c355e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-454588] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8faea501-d57f-4b8d-b54b-2d1ad05434b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"478e372f-c6a1-44ca-a746-5f495926f3fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d88c05fd-a547-4a30-bebe-3d30eca4aea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig"}}
	{"specversion":"1.0","id":"34efb228-17ad-44b3-9b3e-2145e5c5b851","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube"}}
	{"specversion":"1.0","id":"fb2f261f-581a-4a13-9031-0b7f79cb6ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9c5f5304-06c2-4de3-aa0f-421d08d8cbfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f127d389-c7ac-409d-a5d1-077c7de3e6b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-454588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-454588
--- PASS: TestErrorJSONOutput (0.22s)
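The `--output=json` lines in the stdout above are CloudEvents-style JSON objects, one per line; the final `io.k8s.sigs.minikube.error` event carries the exit code and error name. A minimal sketch of consuming such a line (the `cloudEvent` struct is ours, modeling only the fields visible in this log, not minikube's internal type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent models only the fields of minikube's CloudEvents-style JSON
// output that this sketch reads.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// parseEvent decodes one JSON line as emitted by `minikube ... --output=json`.
func parseEvent(line string) cloudEvent {
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	return ev
}

func main() {
	// The error event from the log above, abbreviated.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","name":"DRV_UNSUPPORTED_OS","message":"The driver 'fail' is not supported on linux/arm64"}}`
	ev := parseEvent(line)
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["name"])
}
```

This is how the test distinguishes the error event (and its `exitcode: "56"`) from the ordinary `io.k8s.sigs.minikube.step` and `io.k8s.sigs.minikube.info` events that precede it.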

TestKicCustomNetwork/create_custom_network (36.12s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-242228 --network=
E0912 22:17:44.515559 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-242228 --network=: (34.034672764s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-242228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-242228
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-242228: (2.055930694s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.12s)

TestKicCustomNetwork/use_default_bridge_network (32.8s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-060088 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-060088 --network=bridge: (30.809908333s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-060088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-060088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-060088: (1.974557642s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.80s)

TestKicExistingNetwork (34.81s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-212435 --network=existing-network
E0912 22:18:33.783315 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-212435 --network=existing-network: (32.629383548s)
helpers_test.go:175: Cleaning up "existing-network-212435" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-212435
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-212435: (2.00928033s)
--- PASS: TestKicExistingNetwork (34.81s)

                                                
                                    
TestKicCustomSubnet (32.26s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-458211 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-458211 --subnet=192.168.60.0/24: (30.200158475s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-458211 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-458211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-458211
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-458211: (2.036242299s)
--- PASS: TestKicCustomSubnet (32.26s)
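The assertion behind this test is simple: the subnet that `docker network inspect custom-subnet-458211 --format "{{(index .IPAM.Config 0).Subnet}}"` reports must match the CIDR requested with `--subnet`. A minimal sketch of that comparison, outside the harness (the helper name `subnetMatches` is hypothetical, not from the minikube source):

```go
package main

import (
	"fmt"
	"net"
)

// subnetMatches reports whether the subnet printed by
// `docker network inspect` equals the CIDR requested with --subnet,
// comparing canonical network forms rather than raw strings.
func subnetMatches(requested, reported string) (bool, error) {
	_, reqNet, err := net.ParseCIDR(requested)
	if err != nil {
		return false, err
	}
	_, repNet, err := net.ParseCIDR(reported)
	if err != nil {
		return false, err
	}
	return reqNet.String() == repNet.String(), nil
}

func main() {
	ok, err := subnetMatches("192.168.60.0/24", "192.168.60.0/24")
	fmt.Println(ok, err) // true for the subnet this test requests
}
```

Parsing both sides with net.ParseCIDR avoids false mismatches from formatting differences (for example, a host address given instead of the network address).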

                                                
                                    
TestKicStaticIP (32.86s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-892740 --static-ip=192.168.200.200
E0912 22:19:56.862827 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-892740 --static-ip=192.168.200.200: (30.61742068s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-892740 ip
helpers_test.go:175: Cleaning up "static-ip-892740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-892740
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-892740: (2.08683167s)
--- PASS: TestKicStaticIP (32.86s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (72.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-456792 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-456792 --driver=docker  --container-runtime=docker: (30.947072974s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-459393 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-459393 --driver=docker  --container-runtime=docker: (35.530418993s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-456792
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-459393
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-459393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-459393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-459393: (2.116128292s)
helpers_test.go:175: Cleaning up "first-456792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-456792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-456792: (2.131309537s)
--- PASS: TestMinikubeProfile (72.10s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.51s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-118988 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-118988 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.508536864s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.51s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-118988 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.3s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-131935 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-131935 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.294193945s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-131935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.48s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-118988 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-118988 --alsologtostderr -v=5: (1.480571204s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-131935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-131935
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-131935: (1.202473669s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.49s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-131935
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-131935: (7.485992735s)
--- PASS: TestMountStart/serial/RestartStopped (8.49s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-131935 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (82.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-680694 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0912 22:22:44.515838 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-680694 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m22.167357842s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (82.79s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (39.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-680694 -- rollout status deployment/busybox: (3.294310865s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0912 22:23:33.784020 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-rzzz7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-vdknt -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-rzzz7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-vdknt -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-rzzz7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-vdknt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.13s)
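The retries above come from the harness polling `kubectl get pods -o jsonpath='{.items[*].status.podIP}'` until both busybox replicas report an IP. The parse-and-count step can be sketched like this (a minimal illustration, not the minikube harness code; `countPodIPs` is a hypothetical name):

```go
package main

import (
	"fmt"
	"strings"
)

// countPodIPs parses the space-separated jsonpath output seen in the
// log above (e.g. "'10.244.0.3' '10.244.1.2'") and returns the number
// of non-empty IP fields, stripping the surrounding single quotes.
// The harness keeps re-querying until this reaches the replica count.
func countPodIPs(out string) int {
	n := 0
	for _, f := range strings.Fields(out) {
		if strings.Trim(f, "'") != "" {
			n++
		}
	}
	return n
}

func main() {
	fmt.Println(countPodIPs("'10.244.0.3'"))              // only one pod scheduled yet
	fmt.Println(countPodIPs("'10.244.0.3' '10.244.1.2'")) // both replicas have IPs
}
```

The transient "expected 2 Pod IPs but got 1" lines are expected while the second replica is still being scheduled on the other node.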

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-rzzz7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-rzzz7 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-vdknt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-680694 -- exec busybox-7dff88458-vdknt -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
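The host IP each pod pings is extracted with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`: take the fifth line of the lookup output, then the third space-separated field. A Go sketch of the same extraction (the sample output below is illustrative of busybox nslookup formatting, not captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// field3 mimics `awk 'NR==5' | cut -d' ' -f3`: select the fifth line,
// split on single spaces (empty fields count, as with cut), take field 3.
func field3(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative output shaped like busybox nslookup; line 5 carries the host IP.
	sample := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.58.1 host.minikube.internal`
	fmt.Println(field3(sample))
}
```

The extracted address (192.168.58.1 here) is then the target of the `ping -c 1` commands above.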

                                                
                                    
TestMultiNode/serial/AddNode (20.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-680694 -v 3 --alsologtostderr
E0912 22:24:07.589347 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-680694 -v 3 --alsologtostderr: (19.814046617s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.60s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-680694 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp testdata/cp-test.txt multinode-680694:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942004060/001/cp-test_multinode-680694.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694:/home/docker/cp-test.txt multinode-680694-m02:/home/docker/cp-test_multinode-680694_multinode-680694-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test_multinode-680694_multinode-680694-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694:/home/docker/cp-test.txt multinode-680694-m03:/home/docker/cp-test_multinode-680694_multinode-680694-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test_multinode-680694_multinode-680694-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp testdata/cp-test.txt multinode-680694-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942004060/001/cp-test_multinode-680694-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m02:/home/docker/cp-test.txt multinode-680694:/home/docker/cp-test_multinode-680694-m02_multinode-680694.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test_multinode-680694-m02_multinode-680694.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m02:/home/docker/cp-test.txt multinode-680694-m03:/home/docker/cp-test_multinode-680694-m02_multinode-680694-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test_multinode-680694-m02_multinode-680694-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp testdata/cp-test.txt multinode-680694-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3942004060/001/cp-test_multinode-680694-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m03:/home/docker/cp-test.txt multinode-680694:/home/docker/cp-test_multinode-680694-m03_multinode-680694.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694 "sudo cat /home/docker/cp-test_multinode-680694-m03_multinode-680694.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 cp multinode-680694-m03:/home/docker/cp-test.txt multinode-680694-m02:/home/docker/cp-test_multinode-680694-m03_multinode-680694-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 ssh -n multinode-680694-m02 "sudo cat /home/docker/cp-test_multinode-680694-m03_multinode-680694-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.14s)
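Every cross-node copy above writes to a destination file named after both endpoints, `cp-test_<source>_<dest>.txt`, so the subsequent `ssh -n ... sudo cat` can verify exactly which hop produced it. The naming convention can be sketched as (helper name hypothetical):

```go
package main

import "fmt"

// crossNodeName builds the destination filename used when copying
// cp-test.txt from one node to another, encoding source and destination
// so each round-trip in the log is distinguishable.
func crossNodeName(src, dst string) string {
	return fmt.Sprintf("cp-test_%s_%s.txt", src, dst)
}

func main() {
	fmt.Println(crossNodeName("multinode-680694", "multinode-680694-m02"))
}
```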

                                                
                                    
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-680694 node stop m03: (1.206368932s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-680694 status: exit status 7 (545.149801ms)

                                                
                                                
-- stdout --
	multinode-680694
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-680694-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-680694-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr: exit status 7 (532.050795ms)

                                                
                                                
-- stdout --
	multinode-680694
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-680694-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-680694-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:24:23.379011 1763966 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:24:23.379171 1763966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:24:23.379182 1763966 out.go:358] Setting ErrFile to fd 2...
	I0912 22:24:23.379188 1763966 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:24:23.379426 1763966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:24:23.379634 1763966 out.go:352] Setting JSON to false
	I0912 22:24:23.379666 1763966 mustload.go:65] Loading cluster: multinode-680694
	I0912 22:24:23.379736 1763966 notify.go:220] Checking for updates...
	I0912 22:24:23.380829 1763966 config.go:182] Loaded profile config "multinode-680694": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:24:23.380852 1763966 status.go:255] checking status of multinode-680694 ...
	I0912 22:24:23.381424 1763966 cli_runner.go:164] Run: docker container inspect multinode-680694 --format={{.State.Status}}
	I0912 22:24:23.401499 1763966 status.go:330] multinode-680694 host status = "Running" (err=<nil>)
	I0912 22:24:23.401530 1763966 host.go:66] Checking if "multinode-680694" exists ...
	I0912 22:24:23.401846 1763966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-680694
	I0912 22:24:23.422966 1763966 host.go:66] Checking if "multinode-680694" exists ...
	I0912 22:24:23.423284 1763966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:24:23.423341 1763966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-680694
	I0912 22:24:23.446659 1763966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34470 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/multinode-680694/id_rsa Username:docker}
	I0912 22:24:23.546777 1763966 ssh_runner.go:195] Run: systemctl --version
	I0912 22:24:23.551613 1763966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:24:23.563213 1763966 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:24:23.625766 1763966 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-12 22:24:23.615616403 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0912 22:24:23.626493 1763966 kubeconfig.go:125] found "multinode-680694" server: "https://192.168.58.2:8443"
	I0912 22:24:23.626526 1763966 api_server.go:166] Checking apiserver status ...
	I0912 22:24:23.626578 1763966 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:24:23.637866 1763966 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2265/cgroup
	I0912 22:24:23.650383 1763966 api_server.go:182] apiserver freezer: "11:freezer:/docker/cbec3a505853b8ba915fc7ec62a66b821aeb6ef6fc05c047112195af9009c1eb/kubepods/burstable/pod12c302afa51b8a96da0083e5ac6c1333/462c5e7d6d487a17bdc19be32380984dd9312bca2ab3c6fdab510dfd75fbe6cd"
	I0912 22:24:23.650471 1763966 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cbec3a505853b8ba915fc7ec62a66b821aeb6ef6fc05c047112195af9009c1eb/kubepods/burstable/pod12c302afa51b8a96da0083e5ac6c1333/462c5e7d6d487a17bdc19be32380984dd9312bca2ab3c6fdab510dfd75fbe6cd/freezer.state
	I0912 22:24:23.663384 1763966 api_server.go:204] freezer state: "THAWED"
	I0912 22:24:23.663416 1763966 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0912 22:24:23.671299 1763966 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0912 22:24:23.671327 1763966 status.go:422] multinode-680694 apiserver status = Running (err=<nil>)
	I0912 22:24:23.671339 1763966 status.go:257] multinode-680694 status: &{Name:multinode-680694 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:24:23.671357 1763966 status.go:255] checking status of multinode-680694-m02 ...
	I0912 22:24:23.671686 1763966 cli_runner.go:164] Run: docker container inspect multinode-680694-m02 --format={{.State.Status}}
	I0912 22:24:23.687806 1763966 status.go:330] multinode-680694-m02 host status = "Running" (err=<nil>)
	I0912 22:24:23.687842 1763966 host.go:66] Checking if "multinode-680694-m02" exists ...
	I0912 22:24:23.688168 1763966 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-680694-m02
	I0912 22:24:23.704834 1763966 host.go:66] Checking if "multinode-680694-m02" exists ...
	I0912 22:24:23.705261 1763966 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:24:23.705310 1763966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-680694-m02
	I0912 22:24:23.722128 1763966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34475 SSHKeyPath:/home/jenkins/minikube-integration/19616-1589418/.minikube/machines/multinode-680694-m02/id_rsa Username:docker}
	I0912 22:24:23.823045 1763966 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:24:23.838202 1763966 status.go:257] multinode-680694-m02 status: &{Name:multinode-680694-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:24:23.838256 1763966 status.go:255] checking status of multinode-680694-m03 ...
	I0912 22:24:23.838573 1763966 cli_runner.go:164] Run: docker container inspect multinode-680694-m03 --format={{.State.Status}}
	I0912 22:24:23.855746 1763966 status.go:330] multinode-680694-m03 host status = "Stopped" (err=<nil>)
	I0912 22:24:23.855770 1763966 status.go:343] host is not running, skipping remaining checks
	I0912 22:24:23.855778 1763966 status.go:257] multinode-680694-m03 status: &{Name:multinode-680694-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
TestMultiNode/serial/StartAfterStop (11.32s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-680694 node start m03 -v=7 --alsologtostderr: (10.510561699s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.32s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (98.19s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-680694
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-680694
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-680694: (22.58510455s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-680694 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-680694 --wait=true -v=8 --alsologtostderr: (1m15.472482378s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-680694
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.19s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.61s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-680694 node delete m03: (4.923534362s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.67s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-680694 stop: (21.471334471s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-680694 status: exit status 7 (101.82391ms)

                                                
                                                
-- stdout --
	multinode-680694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-680694-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr: exit status 7 (98.982654ms)

                                                
                                                
-- stdout --
	multinode-680694
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-680694-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:26:40.595615 1777518 out.go:345] Setting OutFile to fd 1 ...
	I0912 22:26:40.595798 1777518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:26:40.595825 1777518 out.go:358] Setting ErrFile to fd 2...
	I0912 22:26:40.595846 1777518 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0912 22:26:40.596136 1777518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19616-1589418/.minikube/bin
	I0912 22:26:40.596368 1777518 out.go:352] Setting JSON to false
	I0912 22:26:40.596413 1777518 mustload.go:65] Loading cluster: multinode-680694
	I0912 22:26:40.596704 1777518 notify.go:220] Checking for updates...
	I0912 22:26:40.596900 1777518 config.go:182] Loaded profile config "multinode-680694": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0912 22:26:40.596936 1777518 status.go:255] checking status of multinode-680694 ...
	I0912 22:26:40.597823 1777518 cli_runner.go:164] Run: docker container inspect multinode-680694 --format={{.State.Status}}
	I0912 22:26:40.615713 1777518 status.go:330] multinode-680694 host status = "Stopped" (err=<nil>)
	I0912 22:26:40.615731 1777518 status.go:343] host is not running, skipping remaining checks
	I0912 22:26:40.615739 1777518 status.go:257] multinode-680694 status: &{Name:multinode-680694 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:26:40.615767 1777518 status.go:255] checking status of multinode-680694-m02 ...
	I0912 22:26:40.616072 1777518 cli_runner.go:164] Run: docker container inspect multinode-680694-m02 --format={{.State.Status}}
	I0912 22:26:40.647932 1777518 status.go:330] multinode-680694-m02 host status = "Stopped" (err=<nil>)
	I0912 22:26:40.647953 1777518 status.go:343] host is not running, skipping remaining checks
	I0912 22:26:40.647960 1777518 status.go:257] multinode-680694-m02 status: &{Name:multinode-680694-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.67s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.75s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-680694 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-680694 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (55.066474735s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-680694 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.75s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.09s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-680694
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-680694-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-680694-m02 --driver=docker  --container-runtime=docker: exit status 14 (86.251736ms)

                                                
                                                
-- stdout --
	* [multinode-680694-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-680694-m02' is duplicated with machine name 'multinode-680694-m02' in profile 'multinode-680694'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-680694-m03 --driver=docker  --container-runtime=docker
E0912 22:27:44.515953 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-680694-m03 --driver=docker  --container-runtime=docker: (34.462666036s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-680694
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-680694: exit status 80 (360.261701ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-680694 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-680694-m03 already exists in multinode-680694-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-680694-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-680694-m03: (2.129261675s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.09s)

                                                
                                    
TestPreload (142.52s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-283941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0912 22:28:33.783920 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-283941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.917260652s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-283941 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-283941 image pull gcr.io/k8s-minikube/busybox: (2.315791828s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-283941
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-283941: (10.922825307s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-283941 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-283941 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (24.849827095s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-283941 image list
helpers_test.go:175: Cleaning up "test-preload-283941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-283941
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-283941: (2.238873717s)
--- PASS: TestPreload (142.52s)

                                                
                                    
TestScheduledStopUnix (105.37s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-658809 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-658809 --memory=2048 --driver=docker  --container-runtime=docker: (32.151315644s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-658809 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-658809 -n scheduled-stop-658809
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-658809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-658809 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-658809 -n scheduled-stop-658809
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-658809
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-658809 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-658809
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-658809: exit status 7 (69.944443ms)

                                                
                                                
-- stdout --
	scheduled-stop-658809
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-658809 -n scheduled-stop-658809
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-658809 -n scheduled-stop-658809: exit status 7 (69.57728ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-658809" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-658809
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-658809: (1.645669654s)
--- PASS: TestScheduledStopUnix (105.37s)

                                                
                                    
TestSkaffold (114.31s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1493463222 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-614913 --memory=2600 --driver=docker  --container-runtime=docker
E0912 22:32:44.515654 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-614913 --memory=2600 --driver=docker  --container-runtime=docker: (31.196349417s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1493463222 run --minikube-profile skaffold-614913 --kube-context skaffold-614913 --status-check=true --port-forward=false --interactive=false
E0912 22:33:33.784334 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1493463222 run --minikube-profile skaffold-614913 --kube-context skaffold-614913 --status-check=true --port-forward=false --interactive=false: (1m7.710289697s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-57f4f85894-svfc8" [8da48dbe-3ae9-4399-b3e6-4a0e590cc0d8] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003606511s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-75f568b98f-rnd9n" [39537225-109d-4e5a-85bc-8175e88e20c5] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003271564s
helpers_test.go:175: Cleaning up "skaffold-614913" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-614913
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-614913: (3.019848545s)
--- PASS: TestSkaffold (114.31s)

                                                
                                    
TestInsufficientStorage (10.92s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-873926 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-873926 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.658369644s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"27c40e89-7789-413f-809b-8086ab5a6e4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-873926] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d740a40-b06b-40f6-928b-aefe226b1699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19616"}}
	{"specversion":"1.0","id":"7b70c367-0244-4710-b98d-9ee87eaf467a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0cb12381-3117-46e0-8aeb-8e992a61be95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig"}}
	{"specversion":"1.0","id":"5b10e46c-2668-4fba-bde5-075a9e93fb1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube"}}
	{"specversion":"1.0","id":"94adf6d9-13d3-41af-bc01-9064ccacd8f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c2c068d7-9f0f-4b73-bbbe-7b330d5198e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ea80557-c245-4ea6-a7b8-2a5e3dacb713","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"08a71f1c-7810-4bb4-be35-4745b9e5a566","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cf05d5ea-3072-4b2e-b086-ad8533d05443","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"92788562-b648-4226-b37c-8d1a9918986f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0c9d5593-4ed9-48bb-bb6f-aef6277e321a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-873926\" primary control-plane node in \"insufficient-storage-873926\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b3dc61f-2337-4e6f-82c6-2d884e9af2ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726156396-19616 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"73b23279-8beb-4ee2-a23d-fa0882d2e528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"efff17cc-34df-4252-bb08-408ae5e2e13d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-873926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-873926 --output=json --layout=cluster: exit status 7 (285.895046ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-873926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-873926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:34:28.586363 1811776 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-873926" does not appear in /home/jenkins/minikube-integration/19616-1589418/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-873926 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-873926 --output=json --layout=cluster: exit status 7 (283.267734ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-873926","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-873926","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:34:28.870831 1811837 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-873926" does not appear in /home/jenkins/minikube-integration/19616-1589418/kubeconfig
	E0912 22:34:28.881026 1811837 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/insufficient-storage-873926/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-873926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-873926
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-873926: (1.693430519s)
--- PASS: TestInsufficientStorage (10.92s)

                                                
                                    
TestRunningBinaryUpgrade (101.29s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2821096836 start -p running-upgrade-220236 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0912 22:44:33.319269 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2821096836 start -p running-upgrade-220236 --memory=2200 --vm-driver=docker  --container-runtime=docker: (42.883404s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-220236 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-220236 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.564716628s)
helpers_test.go:175: Cleaning up "running-upgrade-220236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-220236
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-220236: (3.139995726s)
--- PASS: TestRunningBinaryUpgrade (101.29s)

TestKubernetesUpgrade (371.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0912 22:39:05.615340 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.621720 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.633132 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.654543 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.695943 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.777356 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:05.938904 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:06.260351 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:06.902275 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:08.183801 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:10.745349 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:15.867514 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:39:26.109790 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.962782581s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-652401
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-652401: (1.238909156s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-652401 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-652401 status --format={{.Host}}: exit status 7 (77.616861ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0912 22:39:46.591158 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:40:27.553977 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:40:47.591412 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m43.797235218s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-652401 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (102.07246ms)
-- stdout --
	* [kubernetes-upgrade-652401] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-652401
	    minikube start -p kubernetes-upgrade-652401 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6524012 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-652401 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-652401 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.776103651s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-652401" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-652401
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-652401: (2.505766893s)
--- PASS: TestKubernetesUpgrade (371.57s)
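The K8S_DOWNGRADE_UNSUPPORTED refusal above (exit status 106) comes down to a version comparison: the requested v1.20.0 is older than the cluster's running v1.31.1. A minimal sketch of such a guard, as hypothetical illustration only (not minikube's actual implementation):

```python
def parse_version(v: str) -> tuple:
    """Parse 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def downgrade_requested(existing: str, requested: str) -> bool:
    """True when the requested version is older than the running cluster."""
    return parse_version(requested) < parse_version(existing)

# Mirrors the transitions exercised by TestKubernetesUpgrade:
assert not downgrade_requested("v1.20.0", "v1.31.1")  # upgrade: allowed
assert downgrade_requested("v1.31.1", "v1.20.0")      # downgrade: refused
assert not downgrade_requested("v1.31.1", "v1.31.1")  # same-version restart: allowed
```

Tuple comparison handles multi-digit components correctly (1, 31, 1) > (1, 20, 0), which naive string comparison of "v1.31.1" vs "v1.20.0" would also happen to get right here but not in general.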

TestMissingContainerUpgrade (168.63s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.994647989 start -p missing-upgrade-887669 --memory=2200 --driver=docker  --container-runtime=docker
E0912 22:41:49.476037 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:42:44.515432 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.994647989 start -p missing-upgrade-887669 --memory=2200 --driver=docker  --container-runtime=docker: (1m28.788465758s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-887669
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-887669: (10.45872096s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-887669
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-887669 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0912 22:43:33.783903 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:44:05.615217 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-887669 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m5.827045168s)
helpers_test.go:175: Cleaning up "missing-upgrade-887669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-887669
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-887669: (2.503158668s)
--- PASS: TestMissingContainerUpgrade (168.63s)

TestPause/serial/Start (85.3s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-080607 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-080607 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m25.298661515s)
--- PASS: TestPause/serial/Start (85.30s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (88.581293ms)
-- stdout --
	* [NoKubernetes-223924] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19616
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19616-1589418/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19616-1589418/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (34.74s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223924 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223924 --driver=docker  --container-runtime=docker: (34.359889882s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-223924 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (34.74s)

TestPause/serial/SecondStartNoReconfiguration (31.81s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-080607 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-080607 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.789421716s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.81s)

TestNoKubernetes/serial/StartWithStopK8s (18.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --driver=docker  --container-runtime=docker: (16.288226659s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-223924 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-223924 status -o json: exit status 2 (328.139274ms)
-- stdout --
	{"Name":"NoKubernetes-223924","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-223924
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-223924: (1.768912499s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.39s)
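The `status -o json` payload above is machine-readable. A short sketch that parses the exact JSON printed by the test, showing why `status` exits 2 for a `--no-kubernetes` profile (host up, Kubernetes components down):

```python
import json

# The exact JSON emitted by `minikube status -o json` in the run above.
payload = ('{"Name":"NoKubernetes-223924","Host":"Running","Kubelet":"Stopped",'
           '"APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}')
status = json.loads(payload)

# The host container is running, but kubelet and apiserver are not --
# consistent with a profile started via --no-kubernetes.
assert status["Host"] == "Running"
assert status["Kubelet"] == "Stopped" and status["APIServer"] == "Stopped"
print(status["Name"], "k8s components running:", status["Kubelet"] == "Running")
```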

TestNoKubernetes/serial/Start (8.78s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223924 --no-kubernetes --driver=docker  --container-runtime=docker: (8.777654065s)
--- PASS: TestNoKubernetes/serial/Start (8.78s)

TestPause/serial/Pause (1.35s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-080607 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-080607 --alsologtostderr -v=5: (1.354550182s)
--- PASS: TestPause/serial/Pause (1.35s)

TestPause/serial/VerifyStatus (0.55s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-080607 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-080607 --output=json --layout=cluster: exit status 2 (546.990394ms)
-- stdout --
	{"Name":"pause-080607","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-080607","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.55s)
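The `--layout=cluster` output above reuses HTTP-style status codes (200 OK, 418 Paused, 405 Stopped). A small sketch that parses an abbreviated copy of that payload (only the fields inspected below are kept) to pull out the per-component states:

```python
import json

# Abbreviated from the `status --output=json --layout=cluster` output above.
cluster = json.loads("""
{"Name": "pause-080607", "StatusCode": 418, "StatusName": "Paused",
 "Nodes": [{"Name": "pause-080607", "StatusCode": 200,
            "Components": {"apiserver": {"StatusCode": 418, "StatusName": "Paused"},
                           "kubelet": {"StatusCode": 405, "StatusName": "Stopped"}}}]}
""")

components = cluster["Nodes"][0]["Components"]
for name, comp in sorted(components.items()):
    print(f'{name}: {comp["StatusName"]} ({comp["StatusCode"]})')

# A paused cluster is why the status command exits non-zero (exit status 2) here.
assert cluster["StatusName"] == "Paused"
```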

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-080607 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.94s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-080607 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.94s)

TestPause/serial/DeletePaused (2.29s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-080607 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-080607 --alsologtostderr -v=5: (2.286396869s)
--- PASS: TestPause/serial/DeletePaused (2.29s)

TestPause/serial/VerifyDeletedResources (0.57s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-080607
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-080607: exit status 1 (25.288796ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-080607: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-223924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-223924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (341.981674ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

TestNoKubernetes/serial/ProfileList (0.76s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.76s)

TestNoKubernetes/serial/Stop (1.25s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-223924
E0912 22:36:36.864714 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-223924: (1.249970081s)
--- PASS: TestNoKubernetes/serial/Stop (1.25s)

TestNoKubernetes/serial/StartNoArgs (8.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-223924 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-223924 --driver=docker  --container-runtime=docker: (8.829051859s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-223924 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-223924 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.788449ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (0.72s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (103.39s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1404152139 start -p stopped-upgrade-503019 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1404152139 start -p stopped-upgrade-503019 --memory=2200 --vm-driver=docker  --container-runtime=docker: (51.178204084s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1404152139 -p stopped-upgrade-503019 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1404152139 -p stopped-upgrade-503019 stop: (11.327257387s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-503019 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-503019 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (40.879694427s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (103.39s)

TestNetworkPlugins/group/auto/Start (83.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m23.764646012s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-503019
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-503019: (2.367312182s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.37s)

TestNetworkPlugins/group/kindnet/Start (59.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (59.097079538s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (59.10s)

TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

TestNetworkPlugins/group/auto/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8pr24" [77138999-5d7a-478f-997c-5a6e09196ada] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8pr24" [77138999-5d7a-478f-997c-5a6e09196ada] Running
E0912 22:47:44.515973 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004916586s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.40s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-hxxp2" [3e6a4689-7e35-4200-a039-ce378cdca8d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004391744s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6lhdf" [47cbaa86-8a1c-4e59-94da-0d3eea6bdb0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6lhdf" [47cbaa86-8a1c-4e59-94da-0d3eea6bdb0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004596439s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.41s)

TestNetworkPlugins/group/kindnet/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

TestNetworkPlugins/group/calico/Start (87.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m27.798052109s)
--- PASS: TestNetworkPlugins/group/calico/Start (87.80s)

TestNetworkPlugins/group/custom-flannel/Start (64.74s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0912 22:48:33.783726 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:49:05.615143 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m4.74102755s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.74s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f694k" [dcbcf30c-c579-4207-b066-99fb9b90f865] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f694k" [dcbcf30c-c579-4207-b066-99fb9b90f865] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004795023s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wh76c" [af3a943d-8688-449a-908e-de53d41f847c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00510277s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z6mps" [229dce84-9b5b-41a3-a114-6100b9d14bdd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z6mps" [229dce84-9b5b-41a3-a114-6100b9d14bdd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.007634807s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.32s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.35s)

TestNetworkPlugins/group/calico/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

TestNetworkPlugins/group/false/Start (86.04s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m26.04093698s)
--- PASS: TestNetworkPlugins/group/false/Start (86.04s)

TestNetworkPlugins/group/enable-default-cni/Start (53.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (53.42714278s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.43s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7l625" [6d793f80-b9b7-4954-9109-700ea1047996] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7l625" [6d793f80-b9b7-4954-9109-700ea1047996] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003802368s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.31s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

TestNetworkPlugins/group/false/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

TestNetworkPlugins/group/false/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bj8x7" [1509e256-41c4-488b-af8d-0b383e225a4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bj8x7" [1509e256-41c4-488b-af8d-0b383e225a4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.005642727s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.28s)

TestNetworkPlugins/group/false/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.30s)

TestNetworkPlugins/group/false/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.24s)

TestNetworkPlugins/group/false/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

TestNetworkPlugins/group/flannel/Start (59.88s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (59.880379454s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.88s)

TestNetworkPlugins/group/bridge/Start (82.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0912 22:52:35.818138 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:35.824501 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:35.836438 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:35.857795 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:35.899149 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:35.980861 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:36.142203 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:36.463767 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:37.105712 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:38.387799 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:40.949927 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:44.516039 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.117196 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.123558 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.134884 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.156235 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.197617 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.279000 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.440347 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:45.762062 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:46.071598 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:46.404143 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:47.685652 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:50.247198 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m22.880542681s)
--- PASS: TestNetworkPlugins/group/bridge/Start (82.88s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pn94l" [d8b782db-1083-41c2-bfe0-dfed26d00f42] Running
E0912 22:52:55.369950 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:52:56.313254 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006386271s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gs7kr" [0f09b076-5d00-4f90-9b98-16f697e58f57] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gs7kr" [0f09b076-5d00-4f90-9b98-16f697e58f57] Running
E0912 22:53:05.611943 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004031955s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (85.24s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0912 22:53:33.783722 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-831371 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m25.235120537s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (85.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.47s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fgs8f" [bc544f1a-a452-478a-b57c-d33fcabb6256] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fgs8f" [bc544f1a-a452-478a-b57c-d33fcabb6256] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005311258s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.47s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (145.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-693983 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0912 22:54:31.676626 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:31.683004 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:31.694416 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:31.715777 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:31.757176 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:31.838585 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:32.000057 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:32.321823 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:32.963642 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:34.245264 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:36.806611 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.769162 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.775507 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.786854 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.808206 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.849589 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:38.931027 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:39.092483 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:39.414194 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:40.055896 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:41.337774 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:41.928556 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:43.899925 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:49.021281 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:54:52.169784 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-693983 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m25.170518799s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (145.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-831371 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.45s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-831371 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hdvjg" [5f219f5a-3a27-463e-9bef-95e73fb14fcf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0912 22:54:59.262622 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hdvjg" [5f219f5a-3a27-463e-9bef-95e73fb14fcf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.010814829s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.45s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-831371 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-831371 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.18s)
E0912 23:07:02.183698 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:03.384436 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:07.305079 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:17.547422 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:35.817698 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (86.14s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-857954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:55:53.614232 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:00.706240 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.504545 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.510904 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.522309 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.544016 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.585383 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.666766 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:17.828206 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:18.150437 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:18.792292 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:20.073556 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:22.635326 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:27.756724 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.283899 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.290275 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.301785 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.323151 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.364514 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.445960 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.607432 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:33.929320 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:34.571490 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-857954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m26.144712341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (86.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-693983 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7b7e5303-841d-45f9-a0db-6bcf8b10782d] Pending
E0912 22:56:35.852884 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7b7e5303-841d-45f9-a0db-6bcf8b10782d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0912 22:56:37.998700 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:56:38.414792 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7b7e5303-841d-45f9-a0db-6bcf8b10782d] Running
E0912 22:56:43.536588 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.005300101s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-693983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.51s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-693983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-693983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.128880464s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-693983 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-693983 --alsologtostderr -v=3
E0912 22:56:53.778349 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-693983 --alsologtostderr -v=3: (11.074778679s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-857954 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2b6f43a7-4381-4162-b8bc-301daf71b34a] Pending
helpers_test.go:344: "busybox" [2b6f43a7-4381-4162-b8bc-301daf71b34a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2b6f43a7-4381-4162-b8bc-301daf71b34a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004371215s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-857954 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-693983 -n old-k8s-version-693983
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-693983 -n old-k8s-version-693983: exit status 7 (74.378004ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-693983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (140.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-693983 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0912 22:56:58.480714 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-693983 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m20.241587062s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-693983 -n old-k8s-version-693983
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.60s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-857954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-857954 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.530178326s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-857954 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-857954 --alsologtostderr -v=3
E0912 22:57:14.260233 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:15.536397 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-857954 --alsologtostderr -v=3: (10.996027686s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-857954 -n no-preload-857954
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-857954 -n no-preload-857954: exit status 7 (142.079889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-857954 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (268.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-857954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:57:22.627956 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:27.593198 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:35.817766 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:39.442512 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:44.515540 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:45.116513 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.333297 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.339818 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.351198 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.372550 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.413985 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.495440 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.656904 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:50.978673 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:51.620687 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:52.902514 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:55.221539 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:57:55.464292 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:00.585958 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:03.520597 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:10.828120 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:12.821421 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:31.310309 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:33.784290 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.650751 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.657244 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.668707 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.690111 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.731623 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.812970 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:34.974890 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:35.296171 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:35.938395 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:37.220096 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:39.781908 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:44.903709 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:58:55.145177 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:01.364025 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:05.615251 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:12.271733 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:15.628212 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:17.143786 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-857954 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.805699398s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-857954 -n no-preload-857954
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bvlb8" [48dacfd4-6f19-4475-ad78-c2f8eac8cbdf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004950232s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-bvlb8" [48dacfd4-6f19-4475-ad78-c2f8eac8cbdf] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0042383s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-693983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-693983 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-693983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-693983 -n old-k8s-version-693983
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-693983 -n old-k8s-version-693983: exit status 2 (337.563026ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-693983 -n old-k8s-version-693983
E0912 22:59:31.676415 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-693983 -n old-k8s-version-693983: exit status 2 (315.047637ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-693983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-693983 -n old-k8s-version-693983
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-693983 -n old-k8s-version-693983
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (47.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-312873 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 22:59:38.769167 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:54.907920 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:54.914319 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:54.925682 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:54.947067 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:54.988414 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:55.069782 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:55.231340 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:55.552906 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:56.194728 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:56.590433 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:57.476020 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 22:59:59.378616 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:00:00.037720 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:00:05.163750 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:00:06.469745 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:00:15.405289 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-312873 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (47.392143519s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (47.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312873 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6eb63eab-1243-4730-b0ab-96ecb7603214] Pending
helpers_test.go:344: "busybox" [6eb63eab-1243-4730-b0ab-96ecb7603214] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6eb63eab-1243-4730-b0ab-96ecb7603214] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.006562083s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-312873 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-312873 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-312873 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-312873 --alsologtostderr -v=3
E0912 23:00:34.193826 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:00:35.887475 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-312873 --alsologtostderr -v=3: (11.04994336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312873 -n embed-certs-312873
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312873 -n embed-certs-312873: exit status 7 (68.176645ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-312873 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (289.40s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-312873 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 23:01:16.849626 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:17.504584 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:18.511743 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:33.284012 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.676964 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.683474 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.694999 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.716394 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.757757 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:35.839441 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:36.000846 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:36.327470 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:36.968745 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:38.251198 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:40.812977 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:45.205902 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:01:45.935525 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-312873 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m49.018322238s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-312873 -n embed-certs-312873
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (289.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zjq5n" [2a746cb6-c2cb-4c02-ba70-714c40a8cbc1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004403463s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zjq5n" [2a746cb6-c2cb-4c02-ba70-714c40a8cbc1] Running
E0912 23:01:56.177068 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00490229s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-857954 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-857954 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (3.33s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-857954 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-857954 -n no-preload-857954
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-857954 -n no-preload-857954: exit status 2 (431.935408ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-857954 -n no-preload-857954
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-857954 -n no-preload-857954: exit status 2 (475.572593ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-857954 --alsologtostderr -v=1
E0912 23:02:00.985223 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-857954 -n no-preload-857954
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-857954 -n no-preload-857954
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.33s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-835301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 23:02:16.658949 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:02:35.817604 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/auto-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:02:38.771307 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:02:44.515557 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:02:45.116637 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-835301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (44.403748552s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.40s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-835301 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [df0181d5-c2c8-408e-8fa7-4e4207901a26] Pending
E0912 23:02:50.333270 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [df0181d5-c2c8-408e-8fa7-4e4207901a26] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [df0181d5-c2c8-408e-8fa7-4e4207901a26] Running
E0912 23:02:57.620573 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004483747s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-835301 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-835301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-835301 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.055744791s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-835301 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-835301 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-835301 --alsologtostderr -v=3: (10.996227695s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301: exit status 7 (74.750731ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-835301 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-835301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 23:03:18.035819 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:03:33.783831 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/addons-648158/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:03:34.650930 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:02.353141 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/bridge-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:05.614333 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/skaffold-614913/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:19.542146 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:31.677202 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/custom-flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:38.769631 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/calico-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:04:54.908427 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:05:22.613158 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kubenet-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-835301 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.690596589s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9jrqj" [89104938-bae0-495e-ac33-74b87769bf23] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003380959s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9jrqj" [89104938-bae0-495e-ac33-74b87769bf23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004343939s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-312873 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-312873 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/Pause (2.92s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-312873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312873 -n embed-certs-312873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312873 -n embed-certs-312873: exit status 2 (337.022617ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-312873 -n embed-certs-312873
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-312873 -n embed-certs-312873: exit status 2 (325.414756ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-312873 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-312873 -n embed-certs-312873
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-312873 -n embed-certs-312873
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.92s)

TestStartStop/group/newest-cni/serial/FirstStart (40.38s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-854114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0912 23:06:17.503843 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/enable-default-cni-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-854114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (40.379633804s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.38s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-854114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-854114 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.169516552s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/newest-cni/serial/Stop (5.73s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-854114 --alsologtostderr -v=3
E0912 23:06:33.283998 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/false-831371/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:35.677305 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/old-k8s-version-693983/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-854114 --alsologtostderr -v=3: (5.730421188s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.73s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854114 -n newest-cni-854114
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854114 -n newest-cni-854114: exit status 7 (69.958229ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-854114 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (18.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-854114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-854114 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.662259183s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-854114 -n newest-cni-854114
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.05s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-854114 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/newest-cni/serial/Pause (3.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-854114 --alsologtostderr -v=1
E0912 23:06:57.051291 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.058032 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.069397 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.090760 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.132164 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.213453 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:06:57.375681 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854114 -n newest-cni-854114
E0912 23:06:57.697149 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854114 -n newest-cni-854114: exit status 2 (335.333397ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854114 -n newest-cni-854114
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854114 -n newest-cni-854114: exit status 2 (334.611788ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-854114 --alsologtostderr -v=1
E0912 23:06:58.339557 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-854114 -n newest-cni-854114
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-854114 -n newest-cni-854114
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whcm8" [5675b388-5e1c-4633-9a87-fe7e43a07c13] Running
E0912 23:07:38.029565 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/no-preload-857954/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00412017s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whcm8" [5675b388-5e1c-4633-9a87-fe7e43a07c13] Running
E0912 23:07:44.515998 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/functional-537030/client.crt: no such file or directory" logger="UnhandledError"
E0912 23:07:45.117135 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/kindnet-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003889912s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-835301 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-835301 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-835301 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301: exit status 2 (313.608749ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301: exit status 2 (352.82095ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-835301 --alsologtostderr -v=1
E0912 23:07:50.333806 1594794 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19616-1589418/.minikube/profiles/flannel-831371/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-835301 -n default-k8s-diff-port-835301
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.80s)

                                                
                                    

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.5s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-565752 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-565752" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-565752
--- SKIP: TestDownloadOnlyKic (0.50s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestNetworkPlugins/group/cilium (5.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-831371 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-831371

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-831371" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-831371

>>> host: docker daemon status:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: docker daemon config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: docker system info:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: cri-docker daemon status:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: cri-docker daemon config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: cri-dockerd version:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: containerd daemon status:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: containerd daemon config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: containerd config dump:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: crio daemon status:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: crio daemon config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: /etc/crio:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

>>> host: crio config:
* Profile "cilium-831371" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-831371"

----------------------- debugLogs end: cilium-831371 [took: 4.997898435s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-831371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-831371
--- SKIP: TestNetworkPlugins/group/cilium (5.17s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-662655" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-662655
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)